100th Anniversary Issue: Perspectives in telecommunications


1 Guest Editorial; Berit Svendsen, CTO Telenor

3 Enlightening 100 years of telecom development – From ‘Technical Information’ in 1904 to ‘Telektronikk’ in 2004; Per H Lehne, Telenor R&D

10 Why 1904? A time of rapid and radical technological change; Harald Rinde, BI

Section 1: Perspectives in telecommunications

13 Internet Protocol – Perspectives on consequences and benefits by introducing IP; Terje Jensen, Telenor R&D

22 Personal communication fabrication in the Lyngen Alps; Neil Gershenfeld and Manu Prakash, MIT

27 Perspectives on the dependability of networks and services; Bjarne E Helvik, NTNU

45 Vulnerability exposed: Telecommunications as a hub of society; Jan A Audestad, Telenor

55 Radio interface and access technologies for wireless and mobile systems beyond 3G; Geir E Øien, NTNU

69 The adoption, use and social consequences of mobile communication; Rich Ling, Telenor R&D

82 How Telektronikk changed the Web; Håkon Wium Lie, Opera Software

Section 2: Selections from Norwegian telecom history

85 From radiotelegraphy to fibre technology – Telenor’s history and development on Svalbard; Viggo Bjarne Kristiansen, Telenor Svalbard

97 INMARSAT – a success story! How it was established, later developments, the role of Telenor – former NTA; Ole Johan Haga, Telenor

113 Features of the Internet history – The Norwegian contribution to the development; Paal Spilling and Yngvar Lundh, UNIK

134 Why and how Svalbard got the fibre; Rolf Skår, Norwegian Space Centre

140 Technical solution and implementation of the Svalbard fibre cable; Eirik Gjesteland, Telenor

Section 3: GSM – ideas, origin and milestones – a Norwegian perspective

153 The engagement of Televerket in the specification of GSM; Bjørn Løken, Telenor Nordic Mobile

155 How it all began; Thomas Haug, Telia

159 My work in the GSM services area – An interview with Helene Sandberg; Finn Trosby, Telenor Nordic Mobile

161 Wideband or narrow band? World championships in mobile radio in Paris 1986; Torleiv Maseng, Norwegian Defence Research Establishment

165 GSM Working Party 2 – Towards a radio sub-system for GSM; Rune Harald Rækken, Telenor R&D

170 The Mobile Application Part (MAP) of GSM; Jan A Audestad, Telenor

177 Signalling over the radio path; Knut Erik Walter, Telenor R&D

182 The 1987 European Speech Coding Championship; Jon Emil Natvig, Telenor R&D

187 SMS, the strange duckling of GSM; Finn Trosby, Telenor Nordic Mobile

195 The Norwegian GSM industrialisation – An idea that never took off; Rune Harald Rækken, Telenor R&D

198 MOU of the GSM-MoU: Memorizing Old Undertakings of the GSM-Memorandum of Understanding; Petter Bliksrud, Telenor Nordic Mobile

Section 4: From the archives

202 Introduction; Per H Lehne, Telenor R&D

203 The transatlantic telegraph cable of 1858 and other aspects of early telegraphy; Per H Lehne, Telenor R&D

209 The second wireless in the world; Per H Lehne, Telenor R&D

214 Terms and acronyms


Telektronikk Volume 100 No. 3 – 2004

ISSN 0085-7130

Editor:

Per Hjalmar Lehne

(+47) 916 94 909

[email protected]

Editorial assistant:

Gunhild Luke

(+47) 415 14 125

[email protected]

Editorial office:

Telenor ASA

Telenor R&D

NO-1331 Fornebu

Norway

(+47) 810 77 000

[email protected]

www.telektronikk.com

Editorial board:

Berit Svendsen, CTO Telenor

Ole P. Håkonsen, Professor

Oddvar Hesjedal, Director

Bjørn Løken, Director

Graphic design:

Design Consult AS (Odd Andersen), Oslo

Layout and illustrations:

Gunhild Luke and Åse Aardal,

Telenor R&D

Prepress and printing:

Gan Grafisk, Oslo

Circulation:

4,000

Networks on networks

Connecting entities through networks – in technological, societal and personal terms – enables telecommunication. Networks occur on different levels, form parts of larger networks, and exist in numerous varieties. The artist Odd Andersen visualises the networks on networks by drawing interconnected lines with different widths. Curved connections disturb the order and show that networks are not regular but are adapted to the communication needs.

Per H Lehne, Editor in Chief


Telektronikk’s forerunner, TECHNICAL INFORMATION from the Telegraph Administration, appeared in April 1904, with the intention “to impart knowledge, to all personnel, of the State Telegraph Administration technical installations, and to supply the more important news in the areas of telegraphic and telephonic technology.”

This modest beginning took place at a time when telecommunications was dominated by the telegraph. Since the establishment of the Norwegian Telegraph Administration in 1855, the countrywide telegraph services had by 1904 grown to be a necessary instrument, a must, in all kinds of commercial activities. The telephone service was in its early youth; it existed mainly within the cities, with poor coverage. Long distance telephone calls could be dispatched only on a few routes.

Radio communications had hardly come to Norway. The first trials with radiotelegraphy had taken place in 1903 in the northern part of the country.

There is a huge gap between the state of the art in 1904 and the technology of today, with the wide portfolio of advanced services now being offered. In a number of short articles in this issue, Telektronikk’s editor, Per H. Lehne, gives an overview of the technology and of how some events have been reflected in the journal. In all the years since the introduction of the journal, idealistic engineers and technologists have presented information about innovations in telecom, with emphasis on applications in Norway. In older issues you will find presentations of the bigger paradigm shifts in technology and of new technical installations in the network. The journal has in that way kept up with the original idea of updating the personnel in the technical field. Looking back through the old volumes, a historian may find detailed information about the first long distance network made up of open-wire lines on poles, the development of telephony, the manual systems, the conversion to automatic systems, the automation of the long distance network, the introduction – and the end – of the telex network, the large scale deployment of microwave systems in the trunk network, satellite communication systems, the introduction of fibre technology, and, not to forget, the digitalisation of the telecom network.

In 1967 the Research Institute was established as part of the Telegraph Administration and opened for an influx of young and ambitious researchers, which led to a new keenness in all professional operations of the company. A natural development was that this new personnel category gradually left their mark on the journal and raised the quality of the content. Furthermore, the journal could also serve as a channel to external readers, within and outside the country, and might thus promote the company as a modern technological enterprise. As a consequence, articles written in English appeared in the journal.

The objective of promoting the company was even more emphasized when the state enterprise in 1994 was converted into the private company Telenor (although fully state owned) and later introduced on the stock exchange. A new editorial policy was introduced for the journal, with a more systematic selection of topics for each issue, aiming at readers both at home and abroad. As a consequence, it was decided that Telektronikk should be produced entirely in English.

So, what is ahead? A leading engineer and later Director of the Telegraph Administration, Sverre Rynning Tønnesen, formulated his vision for the telephone service in the 1930s as: “Anyone, located anywhere, should at any time easily get access to a telephone, and be able to establish a telephone call with good quality to any other person, located anywhere in the world.”

Certainly, it can be said that this vision is a reality today. But this is not the end of the development, nor of the demands from the users. Mobile communication has revolutionized the possibility for any person to be accessible at any time and wherever the person might be, not only for telephone conversations, but also for all other kinds of communication which may be realized face-to-face. The demand for portability and higher capacity is increasing and must be expected to increase further in the years to come, as a prerequisite for realizing new and enhanced services. While fixed-line technology generally offers higher capacity than mobile communication, mobile offers portability. One can easily predict that much creative research will be invested in efforts to bridge this kind of gap between the two technologies.

The modern society of today depends on efficient and reliable telecommunications services to enable business operations and public services to society. Over

100 years

BERIT SVENDSEN

Berit Svendsen is Executive Vice President Technology / CTO of Telenor ASA and chairman of Telenor R&D


the last two decades, information technologies and the Internet have been transforming the way companies do business, the way students learn, the way government and communities provide services to citizens, and communication between individuals. Digital technologies have already proved to be a powerful engine for economic growth. Some people claim that it is a change no less than the Industrial Revolution of the 18th century. By boosting economic growth, information and communication technologies have great potential for creating new and better jobs, and generating greater prosperity.

On the road towards the knowledge-based society, today’s vision for the future would be: “Any service, any terminal, anywhere, anytime, anyone”.

There is a continuing challenge for Telektronikk to maintain the old intention, to continue to bring information about technology development, but now as the leading Norwegian telecommunication journal and with a view to promoting Telenor as a professional and outstanding company in the field of telecommunications.

We hope you have enjoyed the presentations in the journal up to now and invite you to future presentations of exciting new developments in telecommunications in the years to come.

Berit Svendsen is Executive Vice President Technology / CTO of Telenor ASA and chairman of Telenor R&D. Since joining Telenor in 1988 Berit Svendsen has held various positions, among them Divisional Director of Data-services and Project Director of FMC. Since February 2002 she has been a member of an advisory group for the EU Commission for the entire ICT research in the EU’s framework programme. She is also chairman of Simula Research Laboratory and Data-Respons ASA as well as a member of the Board of Directors of Ekornes AS. Berit Svendsen holds an MSEE from the Norwegian University of Science and Technology and a Master of Technology Management from the Norwegian University / Norwegian School of Business Administration / MIT.

email: [email protected]


1904 – The year it started

The first issue of TEKNISKE MEDDELELSER fra Telegrafstyrelsen (TECHNICAL INFORMATION from The Telegraph Administration) came in April 1904.

We do not know what happened, or what was said on the occasion. In another article in this Anniversary issue of Telektronikk, Harald Rinde explains how it seems to have been just one initiative in a broader set of reforms to make Telegrafverket more capable of meeting all the new technological changes that occurred from the 1880s onwards [1].

Technological evolution had been rapid in the telegraph and telephone areas. Three years earlier, Marconi had demonstrated wireless telegraphy across the Atlantic Ocean, and Telegrafverket was already planning Norway’s first installation [2].

Jonas Severin Rasmussen, Telegraph Director from 1892 to 1905, came from the position of Reader (Senior Lecturer) at the Norwegian Naval College, and his background was right for the time.

Rasmussen found a good man for the job as editor in Magne Hermod Petersen1). At the time Petersen was Headmaster at the Telegraph Administration’s Training Institute in Christiania2), but he had also been doing pioneering work on wireless telegraphy. In fact, Hermod Petersen was a well-reputed expert on wireless communications. He had the educational as well as the technological skill to fill the position as editor. Later, he worked his way upward in the company to become Head of the Radio Department from 1920, in charge of the Technical Department from 1931, and finally Director of the Telegraph Administration in 1935, a position he held until 1938.

Enlightening 100 years of telecom development
From ‘Technical Information’ in 1904 to ‘Telektronikk’ in 2004

PER H. LEHNE

Per H. Lehne is Research Scientist at Telenor R&D and Editor in Chief of Telektronikk


When the Telegraph Director in 1904 initiated the monthly sheet TEKNISKE MEDDELELSER fra Telegrafstyrelsen (Technical Information from the Telegraph Administration), it is doubtful that he could imagine that it would exist 100 years later as the international journal ‘Telektronikk’, covering virtually all aspects of modern telecommunications technology. The first issue in April 1904 consisted of four pages starting with an article called ‘On the protection of telephone cables from power lines’, an article spanning the first two issues. 100 volumes of the journal have proven to be a fantastic source of Norwegian telecommunications history, written ‘by those who saw it happen’ and actually took part in the development.

1904

The Norwegian National Assembly (the ‘Storting’) discusses the establishment of a separate Norwegian consulate service. The underlying discussion was about whether Norway should have its own foreign policy under the Swedish-Norwegian union, something that would drain the union of almost any content. A unilateral resolution in the Storting in May 1905 eventually led to the dismantling of the union from June 7, 1905.

John William Strutt (Lord Rayleigh) receives the Nobel Prize in Physics “for his investigations of the densities of the most important gases and for his discovery of argon in connection with these studies”. Lord Rayleigh is known to wireless engineers for the ‘Rayleigh distribution’, which models the signal variations of a wireless transmission subject to reflections and non-line of sight propagation (Rayleigh fading). He also developed the theory of Rayleigh scattering, which is the phenomenon giving us the blue sky, and which also applies to light diffusion in optical fibres.
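As an aside, the Rayleigh fading model mentioned above is easy to simulate: when a received signal is the sum of many independent scattered paths with no line-of-sight component, the in-phase and quadrature parts are zero-mean Gaussian and the envelope follows a Rayleigh distribution. A minimal sketch (all parameter values are illustrative, not taken from this article):

```python
import numpy as np

rng = np.random.default_rng(0)

# Model the complex channel gain as I + jQ with independent zero-mean
# Gaussian components; the envelope |h| is then Rayleigh distributed.
n = 100_000
sigma = 1.0  # per-component standard deviation (illustrative)
h = rng.normal(0.0, sigma, n) + 1j * rng.normal(0.0, sigma, n)
envelope = np.abs(h)

# The theoretical mean of a Rayleigh(sigma) variable is sigma * sqrt(pi/2);
# the empirical mean of the simulated envelope should be close to it.
print(envelope.mean(), sigma * np.sqrt(np.pi / 2))
```

With 100,000 samples the empirical mean lands very close to the theoretical value of about 1.25, which is a quick sanity check that the envelope really is Rayleigh distributed.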

The distress signal ‘CQD’ is established. CQD (-.-. --.- -..) was the first standard international Morse code distress signal and was originally proposed and adopted by Marconi on January 7. It lasted only until 1906, when the German standard ‘SOS’ code was adopted.

John Ambrose Fleming invents the vacuum tube. The ‘thermionic valve’ was a diode valve able to rectify alternating current, the forerunner of the radio valve. Fleming had, among other things, also assisted Marconi with his transatlantic experiment in 1901 [2].

1) In most sources other than Thorolf Rafto’s history of the State Telegraph of 1955, he is called only Hermod Petersen, which will be used throughout this article.

2) The Norwegian capital Oslo was named Christiania until 1925.

ISSN 0085-7130 © Telenor ASA 2004

Page 6: 100th Anniversary Issue: Perspectives in telecommunications

4 Telektronikk 3.2004

The first issue was introduced by a short statement:

The Telegraph Administration herewith produces the first issue of the newssheet “Technical Information”. It is intended that this will impart knowledge, to all personnel, of the State Telegraph Technical Installations, and will, in short, supply the more important news in the areas of telegraphic and telephonic technology. As a rule, the newssheet will appear once a month, at the same time as the monthly circular.

In 1995, Telektronikk published an article in connection with the journal’s 90th anniversary [3]. In this article, Henrik Jørgensen goes through the development of the journal against the backdrop of how the State Telegraph – ‘Statstelegrafen’, later ‘Telegrafverket’ – evolved into ‘Televerket’ – the Norwegian Telecom – and from 1995, Telenor, a modern telecommunications company. Jørgensen’s article also contains comprehensive biographies of the editors up to that time.

This article will briefly review the journal’s development, the main source being Jørgensen’s article from 1995.

Unstable start, 1904 – 1921

In the first year, the production starts with monthly issues. As mentioned, the first issue had only four pages; however, the full year amounted to 74 pages. Already the next year, the production rate decreases to six issues, and thereafter it ebbs away to only one issue each year in 1908 and 1909. It then stops altogether until a new editor is appointed in 1916 (Johannes Storstrøm). Hermod Petersen may have been too busy with the work on establishing Svalbard Radio, which opened in 1911, to be able to concentrate on the production of the sheets.

The general situation until 1992 is that the work of the editor is performed in his spare time. It is on the occasion of the journal’s editorial office being transferred to Televerket’s Research Institute (TF) that this changes, and editing becomes part of the regular work.

The first issue of ‘Technical Information from the Telegraph Administration’ appeared in April 1904. It consisted of four pages with an article On the protection of telephone cables from power lines, an article which spanned the first two issues. The sheets appeared monthly, and the whole of 1904 comprised 74 pages, about half the size of one issue in 2004.

(Magne) Hermod Petersen was the first editor of ‘Technical Information’, from 1904 to 1909. He had a teaching background as Headmaster of the Telegraph Administration’s Training Institute in Christiania. He later worked his way up to become Director of the Telegraph Administration in 1935.

Editors of Technical Information and Telektronikk

Hermod Petersen 1904 – 1909

Johannes Storstrøm 1916 – 1921

Sverre Rynning Tønnesen 1927 – 1942

Julius Ringstad 1942 – 1957

Nils Taranger 1957 – 1978

Bjørn Sandnes 1978 – 1991

Ola Espvik 1991 – 2003

Per Hjalmar Lehne 2003 –

Photo: Norwegian Telecom Museum


In the first years, the articles were anonymous, but from 1916 the writers’ names appeared. It is, however, possible from the context, subject and other sources to draw some conclusions about the writers in the early periods, as can be seen in this issue’s article on the wireless telegraph in Lofoten in 1906 [2]. The intention was probably that the sheets were to express the view and voice of the Administration, not the writers.

Again in 1921, the production stops; however, the last issues in the production run were apparently printed in 1925. It seems that the reason for the journal’s disappearance was that it did not perform sufficiently [3], and that the choice of material was too biased.

Stable production starts in 1927

After the second break, it seems that a more obvious editorial control is introduced. Payment for contributions is also introduced as an incentive.

The new editor in 1927, Sverre Rynning Tønnesen3), took the task seriously and declared that [2]:

“In those intervening years when Technical Information did not appear, there have been significant technical developments in both fixed and wireless telegraphy and telephony.”

“In order to secure a common level of knowledge within our readership, we must lay the foundations first in order to build on them in future articles. So we must provide comprehensive explanations of developments within all the different branches of telegraphic and telephonic technology, from the earliest beginnings to the later developments, as well as providing other items of interest.”

Technical Information now came very regularly, even during the Second World War, which had very little apparent effect on the journal.

For natural reasons, contributions in the first years after the war were sparse, as all qualified professionals were fully occupied with rebuilding the telecommunications networks.

At the end of the 1940s and into the 1950s, the journal, edited by Julius Ringstad, starts receiving articles from new sources. From this time, employees from Telegrafverket start going abroad on study tours.

Early on, we can read about the transistor (1950), mobile telephony (1950), Pulse Code Modulation (1951) and new switching principles related to the ITT 8A telephone exchanges4) (1953). The technical and operational achievements of Telegrafverket during the VIth Olympic Winter Games in Oslo in 1952 are of course described, and in 1955, a large centenary issue for Telegrafverket is published.

From Information to Journal

Then in 1959, it was time for a name change. Nils Taranger had taken over as editor in 1957, and the journal had become Telegrafverket’s Technical Journal.

The old name, Technical Information from the Telegraph Administration, was considered too long and

One of the first articles in 1904 treated the new standard telephones introduced by Telegrafverket in 1899. This was a model from LM Ericsson, which from 1902 was manufactured by Elektrisk Bureau (EB) in Christiania.

In 1927, payment is introduced for contributions to the journal:

“For original work, which is assumed to be based on the author’s own research”: 15.00 kroner per column

“For articles based on both original work and non-Norwegian sources”: 10.00 kroner per column

“For items translated from non-Norwegian sources”: 7.50 kroner per column

3) Sverre Rynning Tønnesen later became Director of Telegrafverket from 1942 to 1962.

4) The ITT 8A and 8B switches were manufactured by Standard Telefon og Kabelfabrik A/S (STK), previously a part of Western Electric. These were introduced in the 1950s and were the first combined mechanical-electronic switches in the Norwegian telephone network. Telektronikk has over the years published several articles on the 8B system.

Illustration from ‘Technical Information’, issue 6, 1904


cumbersome, and did not cover the journal’s contents, which were no longer of a purely technical nature. Thus a name which was shorter and more up to the demands of the time was necessary. This was not easy to find, and a competition was launched among the employees of Telegrafverket. The winner was ‘Telektronikk’. The name was considered a truly new word in the language. It contained ‘tele’, a word that was starting to obtain a footing in the language, and ‘elektronikk’ (electronics), a term most familiar to the readers. At the same time, the journal changed appearance and was produced with cover pages.

From now on, the journal expands to contain articles commissioned by the editor as well as spontaneous contributions. Important contributions are the regular proceedings from the biennial Norwegian Telephone Engineers’ Conferences (Norsk Telefoningeniørmøte – NTIM) and also lectures and reports from Telegrafverket’s District Engineers’ meetings.

When Bjørn Sandnes takes over the journal in 1978, it continues to present high quality articles as Televerket enters a new era. New telecom services appear (mobile telephony and data, paging, tele- and videoconferencing, cable-TV, etc.), and are presented in Telektronikk in the 1980s. Also, the new wind of liberalisation leaves its mark, and the journal has several articles on the subject from 1983 to 1986.

To some extent we see thematic issues appearing from time to time. The centenary of the telephone in Norway is the subject of a full issue in 1980; satellite communications is treated in 1981. In 1986 and 1987, when the important decisions are made on GSM (see also the special section in this issue), several articles appear in two issues describing the status as well as technical input and decisions.

When Televerket’s Research Institute (TF, now Telenor R&D) was created at the end of 1967, we see a new personnel group become a source of articles for the journal. It was perhaps a logical development that the journal’s editorial responsibility was moved here in 1992.

New conditions

Televerket stood on the threshold of a total transformation into Telenor AS, a limited company, from 1995. This also changed the conditions. Before this, Televerket, as part of the Norwegian state, ‘had no secrets’, and articles about long term planning, budgets and detailed technical solutions could be published. Under the new constraints, the journal had to change as well. In 1992 it was handed over to the Research Institute, which appointed Ola Espvik as the new editor. His background was in traffic, reliability and operational control of telecommunications networks; however, an equally important qualification was his experience as a Lecturer and Study Advisor in telecommunications and computer science at UNIK – the University of Oslo’s graduate Study Centre at Kjeller – and later as Director of the Joint College Centre at Kjeller.

Thus, a new set of objectives was formulated [2]:

In 1950, an article from a study tour to the USA the year before presents both mobile telephony and the newly invented transistor. The transistor, invented by Bardeen, Brattain and Shockley in 1947, was of the point-contact type. The original is on display at the Bell Labs lobby exhibit, open to the public, at Lucent Technologies headquarters in Murray Hill, NJ, USA.

Photo courtesy of Lucent Technologies Inc./Bell Labs

The evolution of cover pages from 1962 until 2004


Telektronikk shall be the leading Norwegian telecommunications journal.

Telektronikk shall, through its choice of subjects and presentation format, contribute to synchronizing professionals on the Norwegian telecommunications scene with reference to the development of telecommunications techniques.

The editorial line was now systematically transformed into a thematic form, which has proven successful for presenting features like satellite and mobile communications, traffic engineering, switching, network planning, and much more, including non-technical issues like telework, user and social aspects, and telemedicine. From now on, each issue is headed by a feature editor appointed by the editor in chief.

From 1995, the journal is produced solely in English to meet the increasing internationalization. The educational profile is improved, and several issues are used as lecturing material at e.g. the Norwegian University of Science and Technology (NTH/NTNU). It is also increasingly positioning itself as a representative of the entire Norwegian professional environment. A pioneering event also takes place in entering electronic publishing: already in 1993, a special issue on ‘Cyberspace’ is transformed into the new web format. According to Håkon Wium Lie, feature editor of the 4.1993 issue, this was a kind of green-field operation, because many of the procedures for converting documents into HTML and adding pictures and hyperlinks were not established at the time [4].

Already in 1964, two years after the launch of TELSTAR I, the first commercial communications satellite, Telektronikk brings an article about the new possibilities of telecommunications via satellite. The picture shows a projection of TELSTAR I’s first nine orbits, i.e. the first 24 hours. The broad lines show the parts of four orbits, or passages, simultaneously visible from Andover, Maine in the USA and Pleumeur-Bodou in France. Results from TELSTAR I even indicated that reception was possible down to an elevation angle of 0.5°, already preparing the ground for the possibility of communication via satellite to high latitudes like e.g. Svalbard (approx. 3°).

Illustration from Telektronikk, issue 1-2, 1964

In 1978, Telektronikk publishes two articles on the project of developing a new standard telephone set for the Norwegian telephone network. The ‘Tastafon’ was the last such standard telephone. It introduced a new design and push buttons to enable new services in the telephone network. The picture shows the prototype, or ‘A’-model, of which two units were delivered from Elektrisk Bureau to Televerket in August 1976.

Illustration from Telektronikk, issue 4, 1978


It is under Espvik’s editorship that the journal takes on its current format, including a significant improvement of the layout. From 1992 until 2003, each issue has a cover page designed to reflect the subject. The cover art is done by Odd Andersen, a well-reputed designer and artist with whom Ola Espvik collaborated closely in order to bring out the ‘core’ of the subject in an abstract style. This process has truly produced a series of high quality art works.

2004 and looking ahead

In 2003, Ola Espvik chose to withdraw from the position as editor of Telektronikk, and the writer of this article was asked to take over the baton. Taking over the responsibility for this project is both intimidating and fascinating. During Telektronikk’s lifespan, there have been moments demanding new thinking and a new course. Stabilizing the journal under better editorial control in 1927 was one of them, and the transformation in 1992 into a journal for the Norwegian professionals with an aim to reach an international audience is another.

2004 does not demand a radical change. The objectives of 1992 are valid also today, with the interpretation that the journal also must embrace the new members of Telenor’s professional family around the world; thus English is the only natural choice of language.

The thematic form will continue, as it has proven to be the best format for making the journal issues useful and interesting even after the immediate publication date. Themes will be chosen to demonstrate the important topics and developments in telecommunications, as well as giving new readers insight into basic knowledge. Whether the current editor will succeed in keeping the professional level high and also be able to present the important features at the right times, our readers of the future will have to judge.

The ‘Cyberspace’ issue (volume 89, No. 4, 1993) was the first Telektronikk to be put on the web in complete form, initiated by the feature editor, Håkon Wium Lie. It received an ‘Honourable Mention’ in the competition ‘Best Document Design’ at ‘Best of Web ’94’.


The web version of Telektronikk from 1993 has been a stand-alone happening up till now. A web page has existed for quite some years, but giving only the table of contents and the guest editorial of the issues from 1992 onwards. From 2004, we launch our new website, where we will offer full article text availability.

In co-operation with our layout designer since 1992, Odd Andersen, we have also chosen to change the cover art into a permanent design, showing how telecommunications is networking both society and technology on many levels.

We have used paper as the primary medium for 100 years. It is my guess that it will take a long time before electronic media have the properties making them attractive and practically feasible to really replace the paper for a journal like Telektronikk. The most important benefits of paper are namely these: no batteries required, no decoding software needed, anywhere, anytime.

I started this article by postulating that the Telegraph Director and editor in 1904 could not have foreseen what Telektronikk would evolve into during the first 100 years. It is highly probable that they did not reflect on that at all. Knowing that it actually has existed for a century, it is easier to be struck by the thought of the next hundred. I can easily read the issues from 1904 and understand the contents, as well as gain insight into the technology development, problems and solutions of the time. They also tell a lot about how Telegrafverket, Televerket and Telenor organised work and prioritised technical challenges. They have truly ‘enlightened the history’.

Will the readers of 2104 be enlightened? Will Telektronikk exist at all in 2104? What will it look like? What will its name be?

The questions are many.

Acknowledgements

The author of this article wants to thank the former editors of the journal, Bjørn Sandnes and Ola Espvik, for interesting and valuable discussions and viewpoints on Telektronikk's history and significance, as well as for commenting on early versions of the manuscripts. Librarians Arve Nordsveen and Anne Solberg at the Norwegian Telecom Museum have also been very helpful in giving access to the archives of Telektronikk and providing additional information on the subjects treated in this and other articles in this issue.

References

1 Rinde, H. Why 1904? A time of rapid and radical technological change. Telektronikk, 100 (3), 10–12, 2004. (This issue)

2 Lehne, P H. The second wireless in the world. Telektronikk, 100 (3), 209–213, 2004. (This issue)

3 Jørgensen, H. 90 years of Televerket's technical journal – an overview. Telektronikk, 91 (4), 193–210, 1995.

4 Lie, H W. How Telektronikk changed the Web. Telektronikk, 100 (3), 82–85, 2004. (This issue)

Other sources

Århundrets Hvem Hva Hvor. Oslo, Schibsted, 2000.

Collet, J P, Lossius, B O H. Visjon – Forskning – Virkelighet. Televerkets Forskningsinstitutt 25 år. Kjeller, Norwegian Telecom Research (Telenor R&D), 1993.

Norwegian Telecom Museum (Norsk Telemuseum). September 22, 2004 [online] – URL: http://www.norsktele.museum.no/index_ieoff.html

Rafto, T. The history of the Norwegian State Telegraph 1855–1955 (Telegrafverkets historie 1855–1955). Oslo, Telegrafverket, 1955.

Per Hjalmar Lehne (46) is Research Scientist at Telenor R&D and Editor in Chief of Telektronikk. He obtained his M.Sc. from the Norwegian Institute of Science and Technology in 1988. He has since been with Telenor R&D working with different aspects of terrestrial mobile communications. His work since 1993 has been in the area of radio propagation and access technology, especially on smart antennas for GSM and UMTS. He has participated in several RACE, ACTS and IST projects as well as COST actions in the field. His current interests are in the use of MIMO technology in terrestrial mobile and wireless networks and on access network convergence, where he participates in the IST project FLOWS.

email: [email protected]


Page 12: 100th Anniversary Issue: Perspectives in telecommunications


Why 1904? A time of rapid and radical technological change

Harald Rinde*)

Harald Rinde is Researcher at the Centre for Business History at the Norwegian School of Management, BI

Telektronikk's predecessor originates in a period of rapid and radical technological change. The Norwegian Telegraph Administration, the dominant Norwegian operator, had to adopt a new and more systematic approach to the development of its knowledge base. Introducing Technical Information from the Telegraph Administration was only one initiative among a broader set of reforms. Right from the beginning, it became an important vehicle in the development of a knowledge-based telecom industry.

*) Co-author of a new three-volume History of Norwegian Telecommunications, to be published by Gyldendal Fakta in January, 2005.

Introduction

Historically, the journal known today as Telektronikk started out in April 1904 as a monthly publication entitled Technical Information from the Telegraph Administration. The new publication was distributed to the employees of the Norwegian state-owned telegraph and telephone operator together with the monthly administrative circular from the management. To explain the origin of Telektronikk, then, we must understand why the management of the Norwegian Telegraph Administration (NTA) decided to initiate such a publication at just this time. In the following, I will argue that the initiative should be seen as part of a broader shift around the turn of the century, when the old telegraph administration took important steps towards becoming a modern, knowledge-based telecom operator.

Sadly, no evidence has been uncovered in the NTA archives on the specific background for the initiative that led to the new publication. Neither does the Technical Information series itself shed much light on its origins. In the very first issue, it was stated briefly that the purpose of the new series was to supply employees with knowledge of the technical equipment used by the NTA. Furthermore, the series would be used to pass on important news on developments within the field of telegraph and telephone technology. In short, the purpose of the Technical Information series was to disseminate technical information. Big surprise. But why did the task of keeping the staff up-to-date on technological issues become important enough to warrant such a series at just this time?

The situation around 1900

In 1904, the Norwegian Telegraph Administration had been around for nearly 50 years without requiring such a series. Furthermore, the NTA was in a particularly tight financial situation in the first five or six years after the turn of the century. In the aftermath of the crash on the Christiania (Oslo) stock exchange in 1899, the Norwegian economy experienced a sharp cyclical downswing, slowing down the otherwise rapid growth in revenues from telegram and telephone traffic and forcing the National Assembly to cut back on its appropriation of grants to the NTA. Given these circumstances, it seems unlikely that the Technical Information series was merely the realisation of a stray idea developed within the NTA management. To justify the allocation of scarce resources in a time of cut-backs and budgetary restraints, there must have been a strong belief within the organisation in the need for such a publication.

To understand how such a need could arise, it is necessary to take a closer look at the changes that the Telegraph Administration was going through around the turn of the century. With the partial exception of the first few years of the new century, the period from the 1890s until 1920 was a time of unprecedented expansion for the NTA, both in terms of staff and tasks. The number of white collar employees rose from a modest 466 persons in 1895 to 1277 persons in 1905, and 4024 persons in 1920. If we add blue collar workers, extras, and the privately employed staff at the state telephone stations in rural areas, the figures more than double.

This rapid expansion reflected the multiplying of tasks as the new telephone and wireless telegraphy services were added to the NTA's traditional task of operating the national telegraph system. As late as 1890 the NTA owned a total of only 26 telephones, and all telephone subscribers in the country belonged to private networks. From the mid-1890s, however, a new policy was initiated. The construction of a national long-distance telephone system began, and important local telephone networks in cities and


Page 13: 100th Anniversary Issue: Perspectives in telecommunications


towns such as Trondheim (1898) and Oslo (1901) were taken over by the NTA. In addition, from 1901 engineers at the central staff of the NTA were involved in the navy's experiments with radio telegraphy. Two years later, a civilian system of wireless telegraph stations was tested in Lofoten in the northern part of Norway, and in 1906 the first two such stations were opened for regular service. By 1920, wireless telegraph stations had taken over the transmission of telegrams between Norway and America as well as the important new task of communicating with ships at sea.

The change from a small and focussed telegraph administration to an expansive multi-task telecommunications operator enhanced the need for a more systematic approach to the development of technical skills and competence within the organisation. For most of the nineteenth century, it would hardly be adequate to characterise the Telegraph Administration as a knowledge-intensive institution. The job as a telegraph operator was considered a practically oriented occupation, and until 1898 the formal education consisted of a three-month course programme, focussing mainly on the practical mastering of sending and receiving telegrams. Most middle managers in the NTA were recruited among the telegraph operators, and until the 1880s the Telegraph Director was the only member of the organisation who was formally trained as an engineer.

Telecommunication becomes scientifically based

Thus, the organisation was hardly in a position to handle rapid and radical technological change. From the late 1870s and increasingly through the 1880s and 1890s, telegraph technology changed profoundly with the replacement of the traditional manual Morse system by automatic telegraphs and the introduction of multiplex telegraphy, in which two, four, and even six telegrams could be sent simultaneously. In the rapidly evolving fields of telephony and radio telegraphy, technological change was even faster. To make things worse, the new innovations tended to be more directly linked to scientific theory than previously. For instance, radio telegraphy was based on James Clerk Maxwell's theory of the electromagnetic field, and the early stages of the development of the new technology took place in the laboratories of physicists such as Heinrich Hertz, Édouard Branly and Oliver Lodge. Similarly, in the case of long-distance telephony, the seminal introduction of pupinisation (1900) and the Krarup cable (1901–02) was based on the abstract mathematical theories of Oliver Heaviside and Aimé Vaschy, which, contrary to the established view of practically oriented telegraph engineers, implied that the transmission quality of telephone cables or wires could be improved by increasing their self-inductance. Of course, scientific knowledge was only one element in the innovation process. The eventual commercial success of the wireless, for instance, depended crucially on the efforts of the British-Italian entrepreneur Guglielmo Marconi, who brought the new device out of the laboratories and developed it into a practical communications device. Similarly, the development of the Krarup cable was linked to the practical implementation of the new principles in the construction of new submarine cables connecting Denmark with Sweden and Germany.

To handle multiple technologies, each of which was rapidly changing, the NTA had to adopt a new policy on recruitment, education, and competence building. The appointment of the dynamic Jonas Severin Rasmussen as the new Telegraph Director in 1892 helped speed up the process. In the next few years, a number of electrotechnical engineers educated at leading German polytechnical universities were recruited to the Telegraph Director's central staff. Furthermore, the acquisition of the private telephone network in Oslo

Jonas Severin Rasmussen was General Director of the Telegraph Administration from 1892 until 1905. His background before that had been as a senior lecturer at the Norwegian Naval College. His appointment helped speed up the process of adopting a new policy on recruitment, education and competence building (Photo: Norwegian Telecom Museum)


Page 14: 100th Anniversary Issue: Perspectives in telecommunications


in 1901 meant that the country's leading milieu of telephone engineers was brought into the organisation. At the same time, an educational reform was initiated. While the practical education of female telephone operators was decentralised to the local exchanges, the educational plan of 1898 established a new Telegraph School in the capital for the male telegraphers and technicians. Instead of a three-month practical course in telegraphy, the new education lasted for 1½ years and included both theoretical and practical subjects. The exclusion of females soon came under attack, and from 1910 both sexes were admitted on an equal basis. At the same time, the 18-month training programme was split between a ‘lower’ and a ‘higher’ course.

Still, the question of how to keep the regular staff continually up-to-date remained unsolved. In 1888 the first regular journal for telegraphers was published. Seven years later the journal changed its name to Elektroteknisk Tidsskrift, signalling a gradual development towards a general electrotechnical journal. There can be no doubt that this journal represented an important infrastructure for the distribution and discussion of information on recent developments within the field. However, in the heated struggle in the 1890s about the future organisation of the Norwegian telephone system, Elektroteknisk Tidsskrift strongly favoured private telephone companies over state ownership, frequently offering stark criticism of the Telegraph Director and the NTA. It is highly probable that this made the journal less attractive to the NTA management as a suitable source of information for its employees. Furthermore, from the turn of the century the journal gradually devoted more and more attention to the emerging electrical power business at the expense of the coverage of telegraphy and telephony. The introduction of the Technical Information series in 1904, then, may be seen as an attempt to fill this gap in the existing information infrastructure.

Summary

We have traced the origin of Telektronikk to a period of rapid and radical technological change, when the dominant Norwegian telecom operator was forced to adopt a new and more systematic approach to the development of its knowledge base. As one initiative among a broader set of reforms, Technical Information was introduced to distribute up-to-date information on recent technological developments within the telecommunications field. Right from the beginning, then, Telektronikk's predecessor became an important vehicle in the development of a knowledge-based telecom industry.

Selected references

Brittain, J A. The Introduction of the Loading Coil: George A. Campbell and Michael I. Pupin. Technology & Culture, 11, 1970.

Kragh, H. The Krarup Cable: Invention and Early Development. Technology and Culture, 35, 1994.

Rafto, T. Telegrafverkets historie 1855–1955. Oslo, Telegrafverket, 1955.

Rinde, H. Et telesystem vokser fram. Norsk telekommunikasjonshistorie, bind 1, 1855–1920. Oslo, Gyldendal Fakta, January 2005. Forthcoming.

Strømberg, E. Utdanningen i Televerket 1855–1950. Oslo, Teledirektoratet, 1984.

Harald Rinde (39) is a Researcher at the Centre for Business History at the Norwegian School of Management BI. He is a Dr.Art. (2004) from the University of Oslo with a dissertation on the organisation of national telephone systems in Scandinavia in the 1880–1900 period, and is author of the first volume in a new three-volume history of Norwegian telecommunications which will be published in January, 2005. Rinde has published numerous books and articles within the field of business history and the history of technology. He is currently working on a research project about the history of the lawyers' profession in Norway.

email: [email protected]


Page 15: 100th Anniversary Issue: Perspectives in telecommunications


Internet Protocol – Perspectives on consequences and benefits by introducing IP

Terje Jensen

Terje Jensen is Senior Research Scientist at Telenor Research and Development

The Internet Protocol (IP) has been subject to an overwhelming interest during the last decade or so. One may raise the question of why this has been the case. Broadly, the answer can be that IP represents a shift in the philosophy of the telecom business besides being a fairly easy and widespread protocol. This article makes some reflections on issues related to IP development – mostly based on technical aspects, but also linking these to a broader business perspective.

1 Introduction

The Internet Protocol (IP) has been around for several decades, formally managed by the IETF (Internet Engineering Task Force). Just 10–15 years ago the commercial interest in this protocol emerged from the telecom providers' side. Several factors combined to support such a development: i) the volume of IP implementations in different terminal equipment (PCs and host computers); ii) the development of web browser applications providing an intuitive user interface for accessing information; iii) email applications taking the step into the residential sphere; iv) gradual deployment of the IP suite in terminals other than computers; v) a growing political and commercial interest in broadband service offerings. Naturally, other factors also contributed in different regions. Overall, it seems too simple to claim that a single event or factor resulted in the huge interest for IP-based services. Therefore, one may wonder whether it was the open arena for discussing IP-related work progress provided by the IETF, the range of deficiencies seen for the IP suite, or the growing commercial interest – and potential return on effort – that inspired this huge effort. Possibly the actual explanation is a combination of all these as well as a few others.

Making a distinction between IP as a protocol and the “Internet movement” is essential. Principally, IP is just a protocol defining a set of header fields and capabilities. However, no Internet offering is seen without the presence of IP. Hence, IP and the Internet are these days considered to go hand-in-hand. On the other hand, IP-based networks are also implemented that do not allow for public traffic. Examples are enterprise-internal networks, e.g. within and between enterprise sites. Networks for management activities are also regularly realised by IP.

For a future-looking strategy one has to be open to the introduction of other protocols – complementing and/or replacing the IP suite. Although during the last years several other candidates have been promoted for different areas, the IP suite now seems to be the undisputable winner at the network layer. In fact, as its sheer volume and momentum grow, the challenge grows for alternative solutions to take over that position.

The objectives of this article are to place the IP deployment into a commercial context and to outline trends and potential benefits. Some technical aspects will be treated as well, as the commercial opportunities follow from the technical possibilities.

As several papers have described the IP history, the following chapter will rather briefly outline some of the events. The intention is to provide a basis for further descriptions in subsequent sections. Chapter 3 gives an outline of the IP suite, giving a general overview of protocols and mechanisms related to IP and found in IP-based networks. Then, current trends for implementing IP-based networks are described in Chapter 4, together with drivers and consequences as seen today. No perspective article would be complete without reflections on what is next to come. This is the topic of Chapter 5, discussing both next steps related to the IP suite and issues that may call for radical changes or even replacement of IP. An efficient operation of networks and systems requires that past, current and future(s) are linked in a smooth manner. This is treated in Chapter 6, which also gives concluding remarks.

2 Take a look over your shoulder – reflecting on the past

2.1 IP key characteristics

Returning to the conception of IP, one of the main goals was to provide a protocol that was able to transport the information to its destination. This was an essential requirement even during failure situations. Hence, emphasis was placed on the eventual arrival at the destination, less on strict real-time delivery. Without entering the discussion of whether this was originally intended for having a network surviving a nuclear attack, a nuclear power plant accident, reaching vessels under various conditions, or some other


Page 16: 100th Anniversary Issue: Perspectives in telecommunications


cases, it seems clear that the robust transport capability was given high priority amongst the requirements.

This is also reflected by the two key characteristics that identify the IP format:

• Each packet carries the addresses of its destination and source;

• Each packet can be forwarded individually, independent of the routes of preceding and following packets.

2.2 Volumes and industry events

Preparing the foundations in the late 1960s, the IP principles elaborated in the early 1970s have survived for several decades. The more rapid growth came after the establishment of the WWW (World Wide Web) in the beginning of the 1990s. The intuitive “click and get” user interface dramatically lowered the threshold for information collection. Email accompanied the usage, although such message transfer capabilities had already been proposed in the 1960s. However, other phenomena also allowed such growth, like the relative price reduction of computers and the information made available by different sources.

In the late 1990s, however, it seemed as if the telecom industry was becoming too optimistic in its estimates of traffic volumes and revenue potentials in this area. The so-called bubble burst left several companies out of business and others going through drastic downscaling. This also influenced the progress of IP-related developments. As pointed out in [RFC3869], more recent events in different parts of the world have led to increased insecurity and renewed interest in having a dependable telecom infrastructure. The players pulling through the bubble burst also seem to have more sensible expectations of the industry. It is still necessary to join forces in order to make progress in the IP-related areas. This is also a motivation for organisations and initiatives such as ITU, IETF, ETSI, 3GPP, the DSL Forum, the Infranet initiative, and so on. In later years a steady increase in traffic has accompanied the users' interest in the Internet. Figure 1 illustrates the situation regarding Internet subscriptions and broadband accesses in Norway.

2.3 IP on time to meet a need in the service portfolio

Considering the outset and initial motivation, one could raise the question of how this protocol has managed to maintain its position. That position has been manifested not only during the transition from a single-class (best-effort) service to a number of differentiated classes, but also during the shift from academic and internally controlled environments to commercial operations with a range of players involved.

One explanation for the success of IP may be that it is a simple solution that was presented to the market at the right time. In the early 1990s there were few services around providing information collection and messaging to private persons. Then, the combination of browser and email neatly covered that demand. This was accompanied by more affordable PCs and higher modem bit rates.

The commercial interest in IP came around the time when broadband systems were being defined. As such, there was a recognised need for a system and protocol suite supporting these services. Several of the telecom providers had therefore joined forces to specify B-ISDN (Broadband Integrated Services Digital Network). In a fairly traditional (telecom) manner, one started out by defining services and control regimes. As an outsider to this, the IP suite had a simple implementation, requiring neither parallel signalling nor traffic declaration schemes. As we all know, it turned out that the relevant applications were actually well-matched with the simple best-effort regime offered by the IP suite.

Figure 1 Top – Telenor Internet subscriptions in Norway – dial-up and ADSL access (from the Telenor Annual Report 2003); Bottom – Broadband access growth in Norway (from the Norwegian Post and Telecommunications Authority)

Page 17: 100th Anniversary Issue: Perspectives in telecommunications

Some may claim that several of the proposals related to IP (see the following chapter) are there to “repair” the initial deficiencies. Hence, one could ask whether it would not be better simply to find a replacement. Still, it is a fact that the IP formats have broadly maintained their interpretation for several decades. In fact, this may well be one of the causes of its popularity; it would be more discouraging to have to adapt to different versions and protocols that were developed and steadily changed.

On the other hand, one cannot neglect all the suggestions that have been made related to improving certain aspects of having an IP network. In retrospect, it seems fortunate that IP as the core protocol has been left unchanged, and most of the suggestions might be seen as modules that can be added on to the core protocol. Such a modular philosophy supports the flexibility of proposing new modules to replace others, allowing for more services, more efficient network operation, and so forth.

3 The IP suite in a broader perspective

3.1 IP features

The most fundamental IP service is to offer an unreliable, best-effort, connectionless packet transport between a source and a destination. The service is called unreliable as delivery of the packet is not guaranteed (no verification of correctness is taken care of by IP). Connectionless comes from the fact that each packet is treated independently from other packets in the same flow. Best-effort comes from the fact that no packet is assumed to be discarded or distorted on purpose.

A protocol stack is used when transferring information between two points. IP is commonly referred to as the network layer, as packets can be inspected in intermediate routers between the sender and the receiver. Version 4 of IP is the one most applied today. This adds a 20 byte packet header to the information, see Figure 2. The packet header contains a sufficient description for the packet to arrive at its destination and for the information flow to be reconstructed by the receiver. Some pieces of the header also allow packets to be favourably treated (priority field) and avoid that packets stay in the network following too long routes (time-to-live field).
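The header fields mentioned above can be made concrete by parsing the fixed 20-byte IPv4 header with Python's standard struct module; a minimal sketch, with the field layout following RFC 791. The sample bytes are hand-crafted for illustration (the checksum is left at zero), not taken from the article.

```python
import struct

def parse_ipv4_header(raw: bytes) -> dict:
    """Parse the fixed 20-byte IPv4 header (RFC 791), ignoring options."""
    if len(raw) < 20:
        raise ValueError("IPv4 header is at least 20 bytes")
    (ver_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": ver_ihl >> 4,              # should be 4
        "header_len": (ver_ihl & 0x0F) * 4,   # header length in bytes
        "tos": tos,                           # priority / type-of-service field
        "total_length": total_len,
        "ttl": ttl,                           # time-to-live
        "protocol": proto,                    # 6 = TCP, 17 = UDP
        "src": ".".join(str(b) for b in src),
        "dst": ".".join(str(b) for b in dst),
    }

# A hand-crafted header: version 4, IHL 5, TTL 64, protocol 6 (TCP),
# source 10.0.0.1, destination 10.0.0.2 (checksum zeroed for brevity).
example = bytes([0x45, 0x00, 0x00, 0x28, 0x00, 0x00, 0x00, 0x00,
                 0x40, 0x06, 0x00, 0x00, 10, 0, 0, 1, 10, 0, 0, 2])
h = parse_ipv4_header(example)
print(h["version"], h["ttl"], h["src"], "->", h["dst"])
```

Note how both the source and destination address travel in every packet, which is exactly what lets each packet be forwarded individually.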

IP version 6 is also promoted by several sources. A basic argument is that more addresses are supported (128 bit address fields compared to 32 bits for IPv4). However, an effect of this is that the header of IPv6 packets has grown to 40 bytes. On the other hand, the header supports more flexibility and enhanced options.

Error and control messages are incorporated at the IP layer. These allow routers and terminals/hosts to exchange operational and maintenance information. One motivation is to inform that a destination is unreachable, which could be detected by the time-to-live header field being decreased to zero.

3.2 Transport protocols

Above the IP layer, two transport protocols have found their place: one for fast unacknowledged transfer – UDP (User Datagram Protocol), the other for acknowledged transfer – TCP (Transmission Control Protocol). The former is commonly applied for transferring data having strict real-time requirements, where retransmission to correct an erred or dropped packet is not relevant, as it is assumed to arrive too late. Examples are the transfer of real-time voice and video. TCP is widely deployed for transfers of data where correct information is requested. Hence, it has been used for file transfers, email transfers, web browsing, and so forth. In order to support this, a connection between the end-points has to be established and acknowledgements have to be exchanged.

Figure 2 Illustrating the IP suite

Page 18: 100th Anniversary Issue: Perspectives in telecommunications

Due to an initial idea of IP that the end-systems are in charge of the information transfer, it is a challenge to find the appropriate transfer rate (kbit/s) for a host/application. TCP has an incorporated mechanism for estimating the proper rate to apply (slow-start and rate adjustment by increasing and decreasing factors). In principle, the inherent TCP mechanism pursues the proper rate by increasing its transmission rate until a packet is lost (or arrives too late). Measurements have shown that this may result in an oscillating transfer rate. However, a maximum window size can assign an upper limit to the amount of unacknowledged information and thereby a maximum transfer rate. This maximum window size is utilised in some systems, e.g. for UMTS (Universal Mobile Telecommunication System), to avoid the oscillating behaviour of TCP.
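The rate-probing behaviour described above can be sketched as a toy additive-increase/multiplicative-decrease loop with a maximum window cap. This is a deliberate simplification (the exponential slow-start phase is omitted, and the window counts rounds rather than bytes); the function name and parameters are illustrative, not from any real TCP implementation.

```python
def aimd_window(losses, max_window=64, start=1):
    """Toy additive-increase/multiplicative-decrease congestion window.

    losses: iterable of booleans, one per round trip (True = packet lost).
    Returns the window size after each round, capped at max_window
    (mirroring the fixed maximum window mentioned in the text).
    """
    cwnd = start
    trace = []
    for lost in losses:
        if lost:
            cwnd = max(1, cwnd // 2)          # multiplicative decrease on loss
        else:
            cwnd = min(cwnd + 1, max_window)  # additive increase, capped
        trace.append(cwnd)
    return trace

# Without losses the window ramps up linearly until it hits the cap;
# each loss halves it, producing the sawtooth (oscillating) rate.
print(aimd_window([False, False, False, True, False, False]))
```

With a low `max_window`, the trace flattens at the cap instead of oscillating, which is the effect the text describes for UMTS.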

3.3 Underlying protocols

The IP layer may utilise several underlying protocols, including PPP (Point-to-Point Protocol), MPLS (Multi-Protocol Label Switching), and Ethernet. In fact, one of the advantages of IP is its ability to make use of different link protocols. In the core network, MPLS appears to be the current main candidate. This protocol helps to separate traffic flows by offering a tunnelling mechanism. Hence, MPLS tunnels can be managed by intermediate nodes, relieving the IP routing and processing capacity. Commonly, MPLS is used to implement VPN (Virtual Private Network) services.
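The tunnelling idea can be sketched as label swapping along a pre-established path: each node looks up the incoming label and forwards on a new one, never inspecting the IP header. The table entries, node names and label values below are invented for illustration; real label-switched routers distribute labels via signalling protocols.

```python
def mpls_forward(lsp_table, ingress_node, initial_label):
    """Follow a label-switched path through a toy label table.

    lsp_table maps (node, incoming_label) -> (next_node, outgoing_label);
    an outgoing label of None means the label stack is popped at that
    hop (end of tunnel). Returns the sequence of nodes traversed.
    """
    node, label = ingress_node, initial_label
    path = [node]
    while label is not None:
        node, label = lsp_table[(node, label)]
        path.append(node)
    return path

# A toy LSP from router A to C: A swaps label 100 for 200 towards B,
# B pops the label and hands the packet on to C (illustrative values).
lsp = {
    ("A", 100): ("B", 200),
    ("B", 200): ("C", None),
}
print(mpls_forward(lsp, "A", 100))  # ['A', 'B', 'C']
```

The point of the sketch is that the intermediate node B makes its decision from the label alone, which is how MPLS relieves IP routing capacity.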

Several people have investigated relations betweenIP and high-capacity systems, e.g. optical networks.This deals with questions like:

• Architecture, e.g. should the IP layer and the opti-cal layers interact as peers exchanging informationas between equal parties, or should IP simply be anoverlaying client?

• Which framing protocol should IP use – if any – ascontainer when carried by an optical network?

• Should redundancy and protection schemes on IPand optical layers co-operate or carry out theiractions independently?

3.4 Routing and naming

Different routing protocols are often applied – with a division between the domain-internal protocol and the protocol used between different domains. Here, a domain may represent a set of routers (a single operator may define a number of domains). A main difference between the intra- and inter-domain routing protocols is the level of detail in the information exchanged. Between domains this information is limited so as not to reveal too much of the domain-internal structure. In addition, limiting the information also results in less data to be exchanged and stored in the routers. Between domains, BGP (Border Gateway Protocol) provides mechanisms for aggregating routes and ranges of IP address prefixes. This supports efficient interconnection between networks.

There is also a distinction between routing and forwarding; the former is used to describe the exchange of routing information (by routing protocols), and the latter for sending IP packets according to the routes. When MPLS is used as a tunnel, the IP forwarding is by-passed, allowing those packets to travel along different paths from what the routing would dictate. Today, this mechanism can be used for load-balancing a network.

There are different sets of addresses and identifiers in a network. The IP address gives a source or a destination referring to the IP layer, see Figure 2. Then, transport protocols, e.g. TCP or UDP, give a port identity. Between a transport layer and an application, a socket identifier is commonly applied. This will then point to the application, such as a file transfer application. A hierarchy of applications can also be used; for example the web browser can invoke the file transfer application. An example of identification at the application layer is the URI (Uniform Resource Identifier). A DNS (Domain Name System) service is applied to translate between a name and an IP address, such as from www.telenor.com to 129.134.11.24. Then, clicking on a link (described by a URI) can be converted to an IP address (and the server) where the information is found.
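The chain from a clicked link to a server address can be sketched with standard-library calls. The URI is illustrative, and the actual DNS lookup is left commented out since it requires network access:

```python
from urllib.parse import urlparse
import socket

# Extract the host name from a URI; resolving that name to an
# IP address is then a DNS lookup.
uri = "http://www.telenor.com/about"
host = urlparse(uri).hostname
# host == 'www.telenor.com'

# address = socket.gethostbyname(host)  # performs the DNS query
```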

3.5 Traffic handling

Different traffic handling mechanisms are also defined for IP. DiffServ (Differentiated Services) is a way of defining a number of traffic classes where IP packets in the different classes receive different priorities. The result is that high priority IP packets may be forwarded before low priority packets. An argument is to give flows with strict real-time requirements, e.g. voice, higher priority to limit the delay and delay variation. The mechanisms also mean that packets must be classified, the corresponding flows monitored, and queueing disciplines implemented.

ISSN 0085-7130 © Telenor ASA 2004

Telektronikk 3.2004

To fully support ensured services, however, additional mechanisms must be installed. Examples are admission control and resource reservation. Commonly, statistical guarantees can be offered where some of these mechanisms are more loosely applied.
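The class-based forwarding described above can be sketched as a strict-priority queue, where a packet's class decides its place in the service order. This is an illustrative model of the queueing discipline, not a router implementation; the class numbering and packet names are invented for the example:

```python
import heapq

# Lower class number = higher priority, so voice-class packets
# leave the queue before best-effort bulk packets.
queue = []
for arrival, (pkt, cls) in enumerate(
        [("bulk-1", 2), ("voice-1", 0), ("bulk-2", 2), ("voice-2", 0)]):
    heapq.heappush(queue, (cls, arrival, pkt))  # arrival order breaks ties

order = [heapq.heappop(queue)[2] for _ in range(len(queue))]
# order == ['voice-1', 'voice-2', 'bulk-1', 'bulk-2']
```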

3.6 Related protocols

The traffic handling is motivated by combining high resource utilisation with the fulfilment of application requirements. Typically, these mechanisms are fully deployed at the edge of the IP network. An example is the service edge routers where IP packets are inspected after being generated by a DSL (Digital Subscriber Line) user. PPP may for example be utilised as a security means for each user – offering a tunnel between the residential and the service edge router. To provide an IP address and authentication of the user equipment, RADIUS (Remote Authentication Dial-In User Service) could be used between the service edge router and an authentication server. Other protocols are also proposed for service control and management, such as SIP (Session Initiation Protocol) and COPS (Common Open Policy Service). SIP, including some enhancements, currently has much support for controlling services for both fixed and mobile/wireless access.

A quick visit to the IETF web pages gives an impression of the topics that occupy those engaged in that forum. However, other organisations also work on essential aspects of running IP-based operations. Examples are requirements described by ITU, and implementation and interoperability issues described by a number of fora such as the DSL Forum.

4 Current implementation trends and consequences

4.1 IP central in all strategies

Every major operator is currently deploying broadband – and narrowband – services utilising IP. Most operators see IP as an essential element in their network and system development. This should therefore be carefully planned and implemented as part of the strategic plans, not as an isolated activity.

Looking more closely at the operators' strategies, we see that a wide range of services utilising IP are to be provided. Examples of services are TV broadcasting, video-on-demand, voice/telephony, Internet access, and VPNs. There is also a clear trend that IP is introduced in all major network areas. Besides the wired/fixed network, bodies working on mobile systems and broadcasting systems are introducing IP in a steadily growing role.

The steady growth of IP-based networks is motivated by a number of drivers:

• New applications and user groups; these place additional requirements on network capabilities. One example is that low-revenue users also want access to networks and information, requesting sponsored or low-cost solutions.

• New technologies; more technical solutions are developed related to IP-based networks, supporting efficient configuration and operation as well as supporting more user demands and application types.

• Increased load and network expansion; increasing the broadband offering in a region commonly calls for more distribution of equipment. For example, when the access rate on copper lines increases, shorter lines are often used, implying that equipment is located closer to the users.

• Higher network dependency and more providers involved; steadily more actors are basing their businesses on available IP network services. The up-time requirements are therefore becoming stronger. At the same time, the number of players involved in some of the services is growing, requiring arrangements between the players to sort out duties and responsibilities.

This could be taken as proof that IP has matured from simply a best-effort solution to being able to support business-critical applications and differentiation of services.

4.2 Consequences accompanying the IP suite

Broadly, one could recognise two "schools of thought" driving the IP development: i) those wanting to introduce ensured and differentiated services, and ii) those wanting to keep a single IP transport service class. On a generalised level, this could also be seen as a battle between keeping most of the service control (referring to the IP transport service) within the network, as for school i), and keeping the control in the end-system, as for school ii). Should school ii) be the right one to follow, this may be argued on the grounds that basic IP transport services are the only ones asked for. That is, these services must be incorporated into the portfolio – a sub-argument is that, faced with possible performance bottlenecks, more capacity should simply be added. Some may also claim that this is less expensive than implementing advanced traffic handling mechanisms. However, most providers lean towards school i). They mainly apply the following arguments:

• Differentiation: a range of different users and applications is to be supported. These have different requirements (such as delay, loss and availability), intuitively asking for different services.

• Cost reduction: being able to support differentiated services in a common physical network allows for cost savings. That is, it should be less expensive to invest in and operate a common network supporting a range of services, compared with having different physical networks. A scale argument known from traffic studies supports this. Applying traffic theory also shows that better resource utilisation can be achieved when service differentiation is implemented.

• Support for new service packages: having a common IP network should ease the development and offering of so-called converged services. That is, service packages crossing different traditional network accesses become available more swiftly with a common IP network as the basis. The cost and speed of service offering is an important argument. In several of today's incumbent operators' system portfolios, the level of complexity due to interaction between different systems seems to dramatically slow down the speed of service development.
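The traffic-theory scale argument behind the cost-reduction point can be made concrete with the standard Erlang B recursion: pooling two separate trunk groups into one shared group lowers blocking at the same total load. The traffic values below are illustrative, not taken from the article:

```python
def erlang_b(traffic_erlangs: float, servers: int) -> float:
    """Blocking probability from the Erlang B recursion:
    B(0) = 1;  B(n) = A*B(n-1) / (n + A*B(n-1))."""
    b = 1.0
    for n in range(1, servers + 1):
        b = traffic_erlangs * b / (n + traffic_erlangs * b)
    return b

split = erlang_b(8, 10)    # two separate networks: 10 circuits, 8 Erlang each
pooled = erlang_b(16, 20)  # one shared network carrying the same total load
# pooled < split: the common network blocks less at equal utilisation
```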

Summarised, two main groups of arguments are driving the IP deployment from an operator's perspective: the cost saving and the option to offer new service packages. The former is further backed by the observation that "IP type" equipment has been falling faster in price than other equipment. This is expected to continue, driving deployment of IP into new areas of telecommunications. The second, new-service argument is also a trend expected to continue. In some respects, services currently provided by other platforms are being implemented on the IP platform, e.g. voice/telephony. More services are also being added: on-demand, gaming, multi-party messaging. Even if some of these are introduced through other access forms (e.g. mobile), the network-internal implementation is commonly based on IP.

The availability of an "IP interface" also lowers the threshold for other parties to offer services. This is therefore a driver for other providers to get access to an IP-based interface, enabling them to provide services to users in an easy manner.

From the outset, users might not care about technical implementations. However, the Internet and IP philosophy has also accompanied the wide deployment of personal devices such as mobile handsets and computers, as well as other devices. Therefore, end-users also benefit from the wide spread of IP-enabled devices. Naturally, security mechanisms must also be installed to prevent unauthorised intrusion and the disclosure of personal information.

4.3 Some technical deployment trends

In most IP core networks, MPLS is utilised as a tunnelling mechanism. There is also a push towards introducing DiffServ for service priority and efficient resource utilisation. Underlying Ethernet interfaces are argued for because of cost savings. The overall principle is then a common physical network allowing for virtual separation of traffic classes. The shift from IP version 4 to version 6 has not found a definite answer in Europe and the USA. One cause is that the address space is not that restricting in these regions. There is growing interest in finding beneficial solutions which can exploit the capabilities of IP version 6 – other enhancements supported in addition to the greater address space.

For service and session control, several international bodies have promoted SIP. Similar solutions have been promoted for future releases of mobile systems.

On the access side, alternatives to PPP are being investigated. One argument is to arrive at functionally similar solutions for all access forms, whether based on cable or air.

Incumbent operators are evaluating how their network portfolio should evolve. 4–8 years ago there seemed to be a hype that every service would be based on IP fairly soon. These days, however, a healthier view is observed: only services efficiently realised on IP-based networks should be produced in that way. This does not, however, pose many obstacles to the rapid growth of IP traffic. Still, to satisfactorily meet the accompanying requirements, more solutions must be implemented – some are described in the following.

5 So, what’s next?

5.1 Overall issues

As part of the strategy work, it is always reasonable to ask which trends dominate the evolution, as well as which phenomena are challenging the current way of running the operation.


Therefore, it is sensible to look both at the evolution within the IP suite and at which factors may lead to the replacement of parts of the IP suite.

Addressing the former question, work is on-going in several organisations. Examples are the next generation networks activity in ITU and the "all-IP" target in 3GPP, ref. Figure 3. IETF is also pondering on issues that should be researched to ensure a sane evolution of the IP suite.

Considering that so-called next generation networks are based on IP, a number of related trends and corresponding challenges are:

• Holistic perspective: Providers want to offer more "advanced" services on IP. This implies that a reference architecture must be in place. Finding which components to apply for the different modules in the architecture has not been completed. Protocols and mechanisms elaborated in IETF must be put together in an efficient manner. As part of this area, interworking with other networks has to be solved. However, it is essential that the IP network does not import the complexity such interworking could lead to.

• Separation, or decoupling, of control and bearer/transport functions: This allows for defining service platforms to some extent independent of the underlying IP network. A clear interface definition has to be in place, though. The decoupling of bearer and service control supports a modular-inspired architecture. One of the service platform candidates is a solution developed by 3GPP: IMS (IP Multimedia Subsystem). This solution has been promoted by several organisations and operators to potentially offer services on most access types, both fixed and wireless. This requires that the requested functionality is supported by the IP-based network.

• Offer a wider range of services and applications: The types of applications assumed to be supported by an IP-based network are steadily growing. Real-time and business-critical applications are placed on the network in addition to web browsing, email and file transfers. This intensifies the dependability, performance and security aspects. Features such as multicast, mobility, unified access for users, community networking, and so forth, must be supported at the IP layer for efficient realisation.

• The end-to-end perspective: Considering the applications, an IP network is utilised for more demanding purposes. For commercial operations, the users require ensured service levels to justify the payments involved. For example, interrupting a real-time video presentation is not acceptable when it is paid for. Hence, mechanisms for ensured service delivery are required. This implies invocation of traffic and QoS management.

5.2 Some technical issues

Figure 3 Samples of some topics and mechanisms to be addressed during further work on IP-related development: holistic, overall reference architecture; end-to-end perspective; more types of services, applications, users; modularity, decoupling of layers; efficient and swift service roll-out; inter-operable inter-connection; naming; routing; network management; Quality of Service mechanisms; traffic and resource handling mechanisms; applications; service control; security mechanisms; transport protocols; cross-layer interactions; mobility/portability support; monitoring

On the more technical level, several issues have been identified, e.g. ref. [RFC3869], [ITU-IP]. All of these relate to the four areas i) overall architecture/functionality, ii) traffic/performance, iii) security, and iv) network evolution. Efficiency and scalability must also be taken into account. Some of the issues to address are:

• Naming: allocation of IP addresses, domain names, and others (e.g. ENUM, Electronic Numbering) has to be co-ordinated on a global level. Discussions are on-going, involving ITU, IETF and others. A transition from version 4 to version 6 of IP is also considered.

• Routing: Protocols for routing may need to be enhanced or replaced considering the new needs emerging from commercial operations. Examples are more routing metrics including business matters, faster routing convergence to recover from unexpected events, interconnections between domains, multi-homing of devices, and mobile/ad-hoc environments.

• Network management: The growing traffic and user base require that a (logical) network perspective be supported for efficient management. Otherwise, the complexity cost would likely outgrow the scale gain. This also includes monitoring and measurement arrangements. How and what to measure are essential questions – balancing the accuracy and scalability aspects. Management and control of customer equipment would also be requested, allowing end-users to be relieved of intricate configurations. Management of any intermediate system (sometimes referred to as middleboxes) should also be supported. Autonomous procedures would likely be efficient in this regard, naturally also requiring the performance and security aspects to be solved.

• Quality of service: How the different mechanisms actually influence the real end-user experience is not well understood. The user sees the effect of the total system, which calls for an understanding of the overall architecture, likely involving several domains and actors and different combinations of equipment configurations (such as queueing disciplines).

• Efficient resource utilisation: An example is congestion control mechanisms. The in-built features of TCP have been utilised to avoid congestion. New applications and new environments (e.g. wireless) demand other control mechanisms. A new transport protocol, DCCP (Datagram Congestion Control Protocol), is under development. Besides such a protocol, information exchange across layers could also be utilised. For example, intermediate IP routers could signal load levels to the end-systems.

• Application capabilities: Higher requirements are placed on intuitive user interfaces. Depending on the services provided by the IP network, easily comprehended feedback to users can also be required. An example is to inform a user that a load situation somewhere (within the user network, access network, server side, etc.) results in a degraded service level.
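The in-built TCP congestion avoidance mentioned in the resource-utilisation point boils down to an additive-increase/multiplicative-decrease rule, which also produces the oscillating rate discussed earlier in this article. A toy simulation, where the link capacity and the loss model are illustrative rather than realistic:

```python
# Additive increase, multiplicative decrease (AIMD): grow the rate
# by one unit per round trip, halve it when loss is detected.
capacity = 20.0  # illustrative link capacity (rate units)
rate, trace = 1.0, []
for _ in range(50):
    if rate > capacity:   # exceeding capacity stands in for packet loss
        rate /= 2         # multiplicative decrease
    else:
        rate += 1         # additive increase
    trace.append(rate)

# trace ends in the familiar sawtooth, oscillating between roughly
# half the capacity and just above it.
```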

5.3 Challenging IP’s position?

Then, one may ask whether there are any candidates to replace IP. The broad opinion these days is that no strong candidate exists. On the other hand, a number of issues may motivate not using IP in all circumstances:

• Lack of a holistic overall architecture. If such an architecture is not defined, solutions will likely lead to inefficient configurations.

• The overhead added by the IP protocol suite. In case the protocol features are not utilised, low-capacity links do not favour this overhead.

• No coherent mechanisms for ensuring services. The strictest requirements place rather tough demands on the network solution, where other networks currently may meet these demands in a more efficient way.

• Security matters may inspire increased use of "hidden" identifiers, that is, to prevent attacks and the distribution of unsolicited information.

... and, in principle, any other solution that allows for more cost-efficient operation offering timely services. The combination of inexpensive solutions and easy installation tends to decide the winning technologies. The sheer volume of IP-based devices and applications, however, makes it difficult to believe that any alternative would gain a significant position in the medium term. Naturally, increased traffic volume, simple transport network functions and treatment of aggregated traffic mean that the IP level may not be inspected by all intermediate nodes.

6 Present questions – bridging past and future

6.1 Immediate challenges

Based on the current status of commercial operation of IP-based networks, there are several challenges to face in the short term in order to prepare for the future. These challenges are mostly formulated from a network operator's perspective. In broad terms they are grouped into:


Dr. Terje Jensen (42) is Senior Research Scientist at Telenor Research and Development. In recent years he has mostly been engaged in network strategy studies addressing the overall network portfolio of an operator. Besides these activities he has been involved in internal and international projects in various areas, including network planning, performance modelling/analyses and dimensioning. [email protected]

• Business concerns: Although perhaps provocative, it is fair to say that few of the major operators have really incorporated processes and systems for swift offering of IP-based services. Several of the newcomers seem to be much more streamlined, though. These processes cover every phase, from defining customer-related systems and deciding on pricing structures to allowing bundled services, supporting self-service variants, and so forth.

• Service concerns: Communicating what service is actually provided seems to be challenging. SLAs (Service Level Agreements) could be elaborated correspondingly. This also calls for a monitoring apparatus documenting that SLA conditions are met. Offering more value-added services is also expected to be on the operator's wish list. Then, it is necessary to integrate service control and management. A particular aspect is that several systems and players may contribute to complete the service delivery. Additional mechanisms are also required to allow this to take place in a controlled manner.

• IP transport concerns: Interplay between different systems, within and partly outside an operator's domain, requires (standardised) protocols. In some areas these are in place, such as routing between domains. For interacting with customer equipment, several options are currently proposed. Configuring IP resources is also a challenge on the same level, in particular in view of the multiple services assumed to be supported. Monitoring is a challenge at the IP network layer. It is not much of a problem to define a monitoring scheme collecting huge amounts of data; to define a scheme which captures the traffic and performance in an efficient way, however, is still an unresolved issue. Defining and tuning traffic handling mechanisms is also a necessity to deliver ensured services.

6.2 … and, in summary

The steadily growing presence of IP in almost all telecom operations is evident. No matter which access networks and services are looked at, IP-based implementations are seen in the current or future roadmaps. This "IPfication" is an expression of the packet-oriented implementation of telecom services.

No operator/provider should neglect this trend, and proper steps should therefore be prepared in a timely manner. A basic argument for an incumbent operator is that these steps should also assist in reducing its internal complexity, which has likely grown over several years. That is, the IPfication allows a drastic cleansing of an operator's system portfolio. This implies a reduced cost base and swifter service offerings. The whole telecom industry is currently very occupied with finding means to lower costs and allow for an enhanced service portfolio. Society as a whole would also benefit from the lower cost base and improved service offerings. In fact, the technical solutions fuel the so-called global information and knowledge-based society as defined by United Nations bodies.

Major challenges rest within the areas of defining an overall reference architecture, traffic and resource handling, security, and efficient evolution schemes. A target reference architecture allows for describing the modules that should be in place to support service offerings. The modular architecture corresponds with the trends of introducing open interfaces and decoupling different layers. This is also a competitive edge considering the growing heterogeneity and dynamics foreseen for service environments.

In some respects, the IPfication could be considered as the Internet philosophy brought into a commercial context. Here, a principle of "good enough" seems to be dominating – that is, not too much, neither too little. What, then, is "good enough" in a more specific interpretation? Well, that is a multi-billion question continually examined by most of the players involved.

References

[ITU-IP] International Telecommunication Union. ITU and its Activities Related to Internet-Protocol (IP) Networks, ver. 1.1, April 2004. Work in progress. [online] – URL: www.itu.int

[RFC3869] Atkinson, R, Floyd, S. IAB Concerns and Recommendations Regarding Internet Research and Evolution. IETF Request for Comments 3869. August 2004. [online] – URL: www.ietf.org


Personal communication fabrication in the Lyngen Alps

NEIL GERSHENFELD AND MANU PRAKASH

Neil Gershenfeld is the Director of MIT's Center for Bits and Atoms. Manu Prakash is a Master's Candidate at MIT and a research assistant at MIT's Center for Bits and Atoms.

The 100th anniversary of Telektronikk is an auspicious time to look back at Telenor's pioneering role in providing access to telecommunications, and look ahead to the possibility of expanding the availability of not just communication technologies but also the means for their development. An early goal of telecommunications was universal service. Access was seen as an essential enabling tool for participation in a modern economy, and codified in legislation such as the 1934 Communications Act in the US. This was followed by activities that have sought to provide access not just within but across economies, a vision that has been pioneered by Telenor's support for Grameen Phone in Bangladesh. This provides communications via village-scale micro-businesses selling time on a mobile phone, showing that a self-sustaining business model can bring advanced technologies to where they are needed in some of the least-developed places in the world. Now, users are beginning to participate in deploying their own telecommunications infrastructure via mesh wireless networks. Commodity 802.11 access points and radios can be modified to serve as low-cost routers, and high-gain antennas can extend their range to tens of kilometers. Such a system still requires a conventional carrier for the long-haul links, but it does expand not just access but also deployment of the local loop when the economics might not otherwise be favorable (for either the user or carrier).

An example of this is Telenor's "Electronic Shepherd" project [1] in the Lyngen Alps (Figure 1). Here, far above the Arctic Circle, the goal is to assist the traditional practice of Sami nomadic herding with nomadic data. Driven by changes in habitat and land use, aims include tracking the animals and their predators, monitoring their health, bringing them down when needed, and providing continuity of information throughout the animal products supply chain. Components of the system include short-range wireless tags for sheep or reindeer, GPS "bell" tags for the lead animals, and tag readers at salt licks interfaced to a multi-hop 802.11 network.

Figure 1 The Electronic Shepherd project

While this activity is on the current frontier of community-scale communications, it is also looking beyond access, and deployment, to an even greater possibility: the local development and production of advanced telecommunications technologies. It is now possible in the lab to print three-dimensional structures, displays, sensors, actuators, and logic [2]. Putting all of these capabilities into a single printer promises to create one machine that could make any machine, including itself. The relationship between such a personal fabricator and today's industrial machine tools is analogous to that between a personal computer and a mainframe, bringing the programmability of the digital world to the rest of the world.


While this is a long-term research goal requiring many years of work to come on the enabling materials and mechanisms, it is possible today to approximate the functionality of a personal fabricator with a small set of parts and tools. Two things make this possible. First is the intersection of consumer electronics with machine tools. The metrology that allows a CD player to position a read head to a micron can do the same for a cutting tool. Inexpensive table-top NC milling machines can now provide tenth-mil (0.0001") positioning resolution and use few-mil tooling, making it possible to fabricate features down to micron scales. Applications include subtractively machining surface-mount circuit boards, along with making associated electromagnetics, packaging, and interfaces. And second, a $1 microcontroller can have a clock cycle shorter than 1 µs, which is fast enough for microcode implementation of communication modulation, instrumentation signal chains, and video drivers. Together, the accessibility of microns and microseconds means that with a few thousand dollars in capital equipment and a few dollars in components it is possible to design and build complete communicating systems.

These capabilities are being deployed by MIT's Center for Bits and Atoms in field "fab labs" in locations ranging from inner-city Boston to rural India, to coastal Ghana, to the Lyngen Alps. The goal of the fab lab project is to bring into the field today the tools that will eventually be integrated in tomorrow's personal fabricators, in order to understand why and how they will be used. The initial embodiment uses a collection of commercially-available materials and machines, connected by software developed to implement engineering workflows. Instead of a formal curriculum, the educational model is "just-in-time", based on project-based peer-to-peer training. Instructional materials and project documentation are generated by the users as they learn, and shared through a collaboratively-edited Web resource [3]. This can be thought of as an open-source approach to hardware as well as software.

As an example, high-gain 2.4 GHz 802.11 antennas are essential for the Electronic Shepherd project to bring data down from the mountains. Because of the rough terrain, many antennas are needed to route around obstructions, representing a significant cost at commercial prices. Furthermore, the links vary in their need for gain vs coverage, requiring a range of antenna types. Hence one of the first fab lab projects there is developing antennas that can be locally produced and characterized.

The design is based on micropatch arrays (Figure 2) [4–6]. Their development requires sophisticated electromagnetic modeling, and they must be fabricated with tolerances that are a small fraction of the wavelength (sub-mm for 12.5 cm 2.4 GHz radiation), but they can then be made out of inexpensive copper roll stock, and the antenna gain is easily chosen by varying the number of patches (Figure 3). For larger arrays a series-fed design was used to minimize resistive losses from fanout in the impedance-matching feed network, and the antenna performance was modeled with the method of moments (Table 1).

Figure 2 Edge-fed 5-patch and series-fed 13-patch arrays

Figure 3 Calculated normalized radiation patterns for 1, 5, and 13 patch series-fed arrays

Figure 4 Plotting a micropatch array

Table 1 Calculated dependence of the radiation divergence angle, directivity, and gain on the number of patches

             Angle (deg)   Directivity (dB)   Gain (dB)
  1 patch    175.68        6.12               6.12
  5 patch    69.45         10.15              10.14
  13 patch   25.17         14.56              13.62
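The sub-mm tolerance figure follows directly from the operating wavelength, and the patch dimensions shrink with the substrate's dielectric constant. A quick check of the numbers; the dielectric constant here is an assumed, illustrative value for glass, not the measured one from this project:

```python
# Speed of light over frequency reproduces the 12.5 cm wavelength
# quoted above; a half-wave patch on a dielectric substrate is
# shorter by a factor of sqrt(eps_r).
c = 299_792_458.0   # m/s
f = 2.4e9           # Hz, 802.11b/g band
wavelength = c / f  # ~0.125 m

eps_r = 6.0         # assumed, illustrative value for window glass
patch_length = wavelength / (2 * eps_r ** 0.5)  # ~2.5 cm
```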

The patch arrays and their feed lines are cut out with an inexpensive computer-controlled sign cutter (Figure 4) [7], using an application that was developed for toolpath generation across fab lab processes (Figure 5).

Copper-clad low-loss RF substrates are expensive, but with this process the materials choice is greatly simplified because the antenna can be applied to a substrate with a transfer adhesive (Figure 6). We chose window glass as an inexpensive, easily-available material with acceptable performance.

Figure 5 Toolpath generation for micropatch cutting and PCB milling

For a candidate substrate material it is necessary to measure the dielectric constant to size the patches, and the parallel resistance to evaluate the loss. These parameters can be determined from the complex frequency response, or in the time domain from the step response to an applied voltage. The step responses are measured by an inexpensive microcontroller with single-cycle RISC instructions [8]. This has a relatively slow A/D (~10 kHz) but a fast sample-and-hold amplifier (<1 µs), which can be used to undersample the step response by iteratively applying a charging voltage, waiting a variable number of clock cycles, sampling, and converting (Figure 7). The capacitance and the series and parallel resistances can be found from the initial value, initial slope, and asymptote of these curves, and the measurement parasitics determined and removed by varying the electrode size.
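The undersampling loop can be sketched in software: simulate an RC step response sampled one clock later on each repetition of the charging step, then recover the capacitance from the initial slope, as described above. All component values are illustrative, not this project's measurements:

```python
import math

# Equivalent-time sampling sketch: re-apply the charging step and
# sample one clock later each time, rebuilding a waveform far faster
# than the slow A/D could capture directly.
R, C = 10e3, 100e-12   # illustrative: 10 kOhm series, 100 pF under test
clock = 125e-9         # one extra clock of delay per repetition
samples = [1.0 - math.exp(-n * clock / (R * C)) for n in range(64)]

# The initial slope of the normalised step response is 1/(R*C),
# so C follows from the first sample. The finite step size biases
# the estimate slightly high (~106 pF here for a 100 pF part).
slope = samples[1] / clock
c_est = 1 / (R * slope)
```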

The circuit board is made in the fab lab by subtractive machining rather than wet etching, to avoid chemical waste and minimize consumables. A table-top milling machine with sub-mil metrology [9] is used with a 1/64” four-flute short-shank center-cutting end mill [10], to make features within a 0.050” surface-mount pitch (Figure 8). Toolpaths are generated with the same application used for the micropatches (Figure 5).

Figure 6 Assembled micropatch array and feed line

ISSN 0085-7130 © Telenor ASA 2004


The reflected power S11 for a 5-patch array is plotted in Figure 9, showing the primary antenna resonance. The materials cost for this antenna is ~$1; a corresponding commercial antenna costs ~$100–200. The substrate characterization instrumentation uses ~$2 in parts, and the equipment to produce everything costs ~$6k. The incremental fabrication cost per antenna could approach the materials cost with roll-to-roll stamping, but the real value of this production process is its flexibility and locality rather than its throughput. Each antenna made can be optimized for the link where it will be used, and a similar workflow can be used to develop and produce other parts of the system, including the animal radios and packaging, and embedded network nodes.

Over time, the goal of the fab lab project is to replace the commercial components with tools that can be made in the lab, until eventually the labs themselves are self-reproducing. The current capabilities of the Lyngen fab lab can be thought of as being comparable to the minicomputer era in the evolution of computation. Their cost and complexity are on a scale appropriate for use by a workgroup, but like minicomputers that already makes them accessible for demonstrating and developing applications driven by the needs of individuals instead of industries. Rather than being on the remote receiving end of technologies developed elsewhere, Sami herders in the Lyngen Alps and their counterparts around the world can create local solutions to their local needs, leading a global revolution in personal fabrication.

Acknowledgements

This project grew out of collaboration with Haakon Karlsen Jr. from Solvik Gård, and Bjørn Thorstensen and Tore Syversen from Telenor. Preliminary antenna designs were developed by Matt Reynolds and Rehmi Post. This work was supported by the Center for Bits and Atoms (NSF CCR-0122419).

References

1 Thorstensen, B et al. Electronic Shepherd – a Low-Cost, Low-Bandwidth, Wireless Network System. In: Proceedings of the 2nd International Conference on Mobile Systems, Applications and Services – MobiSys, Boston, MA, 2004, 245–255.

2 Ridley, B, Nivi, B, Jacobson, J. All-Inorganic Field Effect Transistors Fabricated by Printing. Science, 286 (5440), 746–749, 1999.

3 The Center for Bits and Atoms. August 30, 2004 [online] – URL: http://fab.cba.mit.edu

Figure 8 Subtractive milling of the impulse response instrumentation

Figure 9 Reflected power for a 5-patch array

Figure 7 Step response measurement of the substrate dielectric constant

[Plot data for Figure 7: step response (0.1–1) vs. time (0–300 µsec); capacitance (10–60 pF) vs. electrode area (0–25 cm2), for electrodes of 1, 4, 9, 16 and 25 cm2]

4 Legay, H, Shafai, L. Planar resonant series-fed arrays. IEE Proceedings Microwaves, Antennas & Propagation, 144 (2), 67–72, 1997.

5 Balanis, C A. Antenna Theory: Analysis and Design, 2nd ed. New York, Wiley, 1997.

6 Pozar, D M, Schaubert, D H. Microstrip Antennas: The Analysis and Design of Microstrip Antennas and Arrays. New York, Wiley-IEEE Computer Society Press, 1995.

7 Roland CX24, CAMM 1 desktop vinyl cutter. October 19, 2004 [online] – URL: http://www.rolanddg.com/products/cx2412.html

8 Atmel AVR ATtiny15L. October 19, 2004 [online] – URL: http://www.atmel.com/dyn/general/tech_doc.asp?doc_id=7222

9 Modela MDX20. October 19, 2004 [online] – URL: http://www.rolanddg.com/products/mdx20a.html

10 SGS Tools product number #30101. October 19, 2004 [online] – URL: http://www.sgstool.com

Prof. Neil Gershenfeld is the Director of MIT’s Center for Bits and Atoms. His unique laboratory investigates the relationship between the content of information and its physical representation, from molecular quantum computers to virtuosic musical instruments. Technology from his lab has been seen and used in settings including New York’s Museum of Modern Art and rural Indian villages, the White House/Smithsonian Millennium celebration and automobile safety systems, Las Vegas shows and Sami reindeer herds. He is the author of numerous technical publications, patents, and books including “When Things Start To Think”, “The Nature of Mathematical Modeling”, and “The Physics of Information Technology”, and has been featured in media such as The New York Times, The Economist, CNN, and the McNeil/Lehrer News Hour. Dr. Gershenfeld has a BA in Physics with High Honors from Swarthmore College, a Ph.D. from Cornell University, was a Junior Fellow of the Harvard University Society of Fellows, and a member of the research staff at Bell Labs.

email: [email protected]

Manu Prakash (24) is currently a Master’s Candidate at Massachusetts Institute of Technology and a research assistant at the Center for Bits and Atoms at MIT. Manu is passionate about technological advances for developing countries. He believes the hardest technological challenges lie in the most remote places in the world. Manu is an active member of several organizations including Association for India’s Development, Thinkcycle and Design that Matters. Before joining MIT, Manu started an educational program in Indian schools, titled “Build Robots Create Science”, to instill a passion for tinkering with technology. He was recently awarded the Ideas Prize at MIT for his work on Battery Vending Machines. Manu’s research interests span the physics of computing, microfabrication techniques and RF design. He holds a Bachelor’s Degree in Computer Science and Engineering from the Indian Institute of Technology, Kanpur.

email: [email protected]


Many foresee a future where we will have a “digital” existence in cyberspace that will be an integrated part of our private, social and professional life. At the same time, we will be surrounded by intelligent networked devices that will ease and support our living. Even if this future scenario does not become true in its full extent, it is beyond doubt that our personal, social and economic welfare is and will become increasingly more dependent on information and communication technology. However, can we trust our welfare to this technology? What means may be taken to ensure the trustworthiness of services delivered, and at what cost?

1 Introduction

It is beyond the ambition of this paper to give comprehensive answers to the questions outlined above or to discuss the issue to its full extent as illustrated in Figure 1. I will focus on the system issues and the characteristics of the service provided and not go into detail, neither on the dependability of the hardware, software or “humanware” components that go into

the system, nor into the economic, social and “human” impact of failures. What will be done is to outline what are the threats to the trustworthiness, what will be the challenges in future systems, and to discuss the shift in technical solutions. The paper consists of two parts, which may be read separately. The first, general part in Chapter 2 will discuss the ongoing evolution of networks and telecommunication services with respect to technology, provision and use. History has shown that dependability must be built into the systems and cannot with success be added afterwards. The current “ruling” technologies, the Internet and SPC telephony, had this in mind. Do we today make the correct choices for the next generation networks and service provision? A number of challenges are posed by the increasing technological and operational complexity, the approach toward ambient intelligence, as well as the integration of services with widely differing requirements in the same system. The second part of the paper, i.e. Chapter 3, focuses on technological issues in the core transport network and ‘centralized’ service provision. The dependability of these parts of the network is of great

Perspectives on the dependability of networks and services

Bjarne E. Helvik

Bjarne E. Helvik is Professor at the Norwegian University of Science and Technology.

We are likely to meet a future where we rely on an increasingly wider range of information and communication services in our private, social and professional life. In addition to what we are familiar with today, some services will be ”invisible” and provided by an ambient intelligent network. A part of the services will not be critical, while a continuous functioning of others will be mandatory for our productivity and well-being. All of these services are intended to be provided by one integrated communication infrastructure. This requires that attention is paid to availability, continuity of service, safety, i.e. dependability issues, in its design and operation. The objective of this paper is to identify some challenges and indicate tentative developments to cope with these dependability issues. One major class of challenges is posed by the steadily increasing complexity and heterogeneity with respect to services as well as service requirements, the technical installations, and the broad spectrum of parties providing these services. A foreseen development to deal with this is towards autonomy in (re)configuration, interaction between network entities and fault management. Another major class of challenges is posed by the necessity to have a stable core transport network and service provision. Within these areas, a change of technological generation has started, combined with a merge between ”Internet technologies” and ”traditional telecom technologies”. These developments are outlined and discussed.

Figure 1 The span of issues related to dependability is wide. The paper concentrates on the system and service levels, having the failures of HW, SW and “human” components in mind as well as the requirements of and implications on the users and society

Component – System – Service – User – Societal


importance since the consequences of outages are large. A high dependability (or synonymously robustness, resilience, survivability or fault-tolerance) is in the current systems typically ensured by a high usage of dedicated spare resources and special designs. We have a trend toward more dynamic and flexible use of spare capacity, less special design, and at least for the transport network, a less complicated redundancy structure and fault management. Can we trust these solutions in the same way as the proven technologies? Before we come to these issues, it is necessary to introduce some concepts and give some background.

1.1 What is dependability?

In the above section the term trustworthiness is used to describe the property we are seeking. Dependability is a term which is adopted in standardization and elsewhere in order to define all aspects of a system related to this property, cf. for instance [15, 10]. Dependability may be defined as the trustworthiness of a system, in the sense that we (justifiably) may rely on the services it delivers to its users. A user may be another technical or human system, as well as an end user which interacts with the former, i.e. definition-wise, the concept of system and service is used wider than indicated in the levels of Figure 1. For a more

Figure 2 Dependability ontology based on concepts and terms from IFIP WG10.4

Dependability
- Threats
  - Faults: physical faults (permanent, transient), design faults, interaction faults, environment
  - Errors: detection, multiplicity
  - Failures: symptoms, severities
- Attributes
  - Availability: instantaneous and asymptotic (un)availability, interval (un)availability
  - Reliability: reliability function, MTFF, MTTF, MTBF – failure intensity
  - Maintainability: mean active repair time (MART), mean automatic service restoration time
  - Safety: safety function, catastrophic failure intensity
  - Confidentiality
  - Integrity
- Means
  - Fault prevention: quality assurance, derating, conservative design, formal methods
  - Fault tolerance: error detection, recovery
  - Fault removal: verification, diagnosis, correction
  - Fault forecasting: ordinal evaluation, probabilistic evaluation


comprehensive definition and introduction to dependability concepts, the reader is referred to [3].

A top level ontology of the dependability area is shown in Figure 2. From this figure we see one of the reasons for introducing a new term: the common term reliability has also been given specific technical meanings, the ability to provide uninterrupted service and the probability of doing so for a specific period. Hence, the new term to avoid ambiguity. Dependability consists of:

• Threats or impairments to dependability: faults, errors and failures, as well as their causes, consequences and characteristics. These may arise from physical imperfections in the system, physical influence and damage from the outside, human-logical faults made during specification, design, development, installation and operation, etc. Threats may also be intentional and hostile. The latter kind is commonly regarded as security issues.

As illustrated in Figure 3, there is an overlap between the random and intentional threats, and it may be advantageous to have a unified view of the trustworthiness. However, in this paper just the random threats will be regarded, cf. Section 1.2 below.

• Means to achieve dependability are: 1) avoiding that faults occur or are introduced into the system, 2) finding and removing faults from the system, and 3) tolerating faults in the system by preventing them from resulting in failures. Means like testing and evaluation by modelling and mathematical analysis or simulation are also needed for the cost-efficient design and dimensioning of systems with respect to their required attributes, and to validate and reach confidence in the result. See also Figures 2 and 3.

• Attributes of a system, which characterise its properties with respect to dependability. The attributes may be regarded as the following “abilities”. Avail-

Figure 3 Trustworthiness subject to random and intentional threats

[Figure labels: random threats (equipment failures, transmission failures, design faults, operational failures) and intentional threats (sabotage, intrusions, eavesdropping) act, together with the environment and the requirements, on the telecommunication network and service provision; means (architecture, mechanisms, protocols, redundancy, deployment & operational procedures) yield adequately trustworthy services]


ability: to provide a set of services at a given instant of time or at any instant within a given time interval. Reliability: to provide uninterrupted service. Safety: to provide service without the occurrence of catastrophic failures. Maintainability: the ability of the system to support fault removal and service restoration, as well as undergo change. Confidentiality: to prevent unauthorized access to and/or handling of information. Integrity: the absence of improper alterations of information and system state. The attributes are characterized by measures, some included in Figure 2.

It is seen that the quantification of all dependability attributes, i.e. the measures, except for maintainability, are significant QoS parameters for describing the services provided by the system, see for instance [12]. Some of the characteristics are also related to security, which deals with intentional threats to systems’ trustworthiness, cf. Figure 3. Dependability is also strongly related to the performance and traffic handling abilities of a system. A system cannot be said to be available or work reliably unless the services are provided with a sufficient capacity, and have response time and delay characteristics as required.
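To make the availability measures of Figure 2 concrete, the following minimal sketch uses the standard relation A = MTTF / (MTTF + MTTR) for asymptotic availability; this is textbook material, not a formula from this paper, and the numbers are purely illustrative:

```python
def asymptotic_availability(mttf_h, mttr_h):
    """Asymptotic availability of a repairable system under the usual
    alternating up/down renewal model: A = MTTF / (MTTF + MTTR)."""
    return mttf_h / (mttf_h + mttr_h)

# Illustrative example: an element failing on average every 10,000 hours,
# with a 4 hour mean manual repair time.
a = asymptotic_availability(10_000, 4)
u = 1 - a
downtime_min_per_year = u * 365.25 * 24 * 60
print(f"availability      {a:.6f}")
print(f"unavailability    {u:.2e}")
print(f"downtime per year {downtime_min_per_year:.1f} minutes")
```

Such a figure, a few hundred minutes of expected downtime per year, illustrates why the long manual repair times noted in Section 1.2 can dominate the unavailability budget even when element failures are rare.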

1.2 The real threats to dependability

The dependability (trustworthiness) of a system should be envisaged similarly to the common concept of the trustworthiness of a person. It is whether the person delivers services as agreed/expected and otherwise behaves according to the common norms that matters, not the reason why they do or do not, and ideally, not how the person is built or what they look like. The conception of dependability (reliability) in a technological context has, however, been coloured by its use when it emerged as a technical discipline in the 1950s. At that time the focus was on mechanical and relatively small, simple, hardware dominated information and communications systems, and permanent physical faults were the dominant cause of failure.

This is no longer the case. Generally speaking, the major cause of lack of dependability in today’s information and communication systems is faults due to human error. Logical faults are made during specification, design, and implementation, some of which prevail when systems are put into operation. These are often perceived as ‘software faults’. However, with the increasing hardware and overall system complexity there is an increasing number of logical faults

in hard- and firmware as well. In addition to the logical faults, there may be glitches in the adaptation of a (sub)system to its co-operating (sub)systems and the environment. Mistakes are made during configuration, operation and maintenance. However, the preconception that “only” hardware failures reduce system dependability led for a long time to less awareness of these threats to system dependability. Human-introduced faults in systems have not received the proper awareness as system complexity has grown, and have in some cases been “accepted”.

Published fault and failure data are scarce1), so it is difficult to be precise concerning the relative impact of the various kinds of causes. The relative impact will of course also vary depending on the kind of system and its design, e.g. fault-tolerant or not. The reader is referred to [14, 21, 37, 31] for a rough overview. To summarize into a few simple statements:

• Equal attention should be paid to faults stemming from
  - physical causes,
  - inadequacies in specification, design and implementation, and
  - operation and maintenance.

• The technology to deal with physical hardware faults is better developed. There are fewer such element failures and the systems designed to tolerate them are in most cases able to do so. However, rectifying failures due to these faults requires manual intervention, which results in longer down times, and the contribution to system unavailability may become significant.

• Logical faults in the software (and hardware) are a significant contributor to the failure intensity of system elements. Fortunately the failures typically have a limited effect. Rather few of them lead to major system outages, but they have the potential to do so, for instance by being a common cause throughout a system and by error propagation.

• Larger outages are typically caused by destruction of the physical infrastructure by nature and humans, or by operation and maintenance mistakes.

• It seems as if operation and maintenance related faults are of large and increasing importance.

1) Operational failure data are costly to collect. Furthermore, organizations tend to keep them for internal use and restrict their public availability. (The Norwegian university network operator, UNINETT, forms an exception as they make part of the operational statistics public, see http://drift.uninett.no/downs/.) However, some PTTs require that QoS related failure statistics and/or significant incidents are reported, see for instance [21].


In conclusion, the real threat to the dependability of today’s and future systems is their unmanageable logical complexity, yielding design and O&M faults. An important reminder is also what is pointed out in [21]: overload may be a major contributor to lack of dependability.

1.3 How to deal with the threats

There is a lot of confusion concerning the concepts of fault, error and failure, since in daily language the terms are used interchangeably and mean that something does not work, is incorrect, etc. However, in dependability engineering they denote separate phases from cause to consequence, as illustrated in Figure 4. Starting with the consequence:

• Failure2): delivery of incorrect service, or the transition from correct service delivery to incorrect. A failure is a manifestation of an error that we observe on the outside of a system.

• Error3): a system state that is liable to lead to failure, i.e. the manifestation of a fault within a system.

• Fault4): the cause of an error. Note that faults may have a multitude of physical and human causes and be of various kinds, as illustrated to the right in Figure 5.

In composite systems, where one part of the system uses services from other parts, the failure of one part may constitute a fault in another. This is discussed together with other conceptual issues related to the failure process in [3, 10].

The basic means to obtain dependable systems are fault prevention and fault tolerance. These may be seen as barriers in the cause-consequence chain, as illustrated in Figure 5. The fault prevention techniques aim at preventing the causes of failures, the faults, from entering or occurring in the system, e.g. by extensive quality assurance of the software, benign operating conditions for the hardware, etc. In the fault tolerant approach, the existence of faults is accepted. However, when they result in errors, the system is given a structure and mechanisms to prevent the errors from manifesting themselves as failures. A number of mature and well known techniques exist, e.g. FEC (forward error correcting codes) or CRC (cyclic redundancy check) combined with retransmission of erred packets to tolerate transient transmission faults. In the transport network there are

2) In Norwegian: Feilytring
3) In Norwegian: Feiltilstand
4) In Norwegian: Feilårsak

Figure 4 Relationship between faults, errors and failures

[Figure: a fault within the system leads to an error, e.g. a corrupted bit (1001100110 → 1001101110), which may manifest itself as a failure observed by the user]

techniques for rerouting of traffic when network links and nodes fail. We will return to techniques in Section 3.1. Fault-tolerance in the computing functionality is achieved by replication. Various options will be discussed in Section 3.2. Note, however, that we only have generally applicable and mature fault tolerance techniques to deal with physical faults, and that for faults caused by humans we have primarily to rely on prevention.
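The CRC-plus-retransmission mechanism mentioned above can be sketched as a toy model: a CRC-32 checksum (here via Python's zlib) detects the error caused by a transient transmission fault, and the sender retransmits until a frame passes the check. The bit-flip channel and all parameters are invented for illustration; nothing here is specific to the systems discussed in the paper:

```python
import random
import zlib

def send(payload: bytes) -> bytes:
    """Append a CRC-32 checksum so the receiver can detect errors."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def noisy_channel(frame: bytes, p_bitflip: float, rng) -> bytes:
    """Flip each bit independently with probability p_bitflip,
    modelling a transient transmission fault."""
    out = bytearray(frame)
    for i in range(len(out)):
        for b in range(8):
            if rng.random() < p_bitflip:
                out[i] ^= 1 << b
    return bytes(out)

def receive(frame: bytes):
    """Return the payload if the CRC checks out, else None
    (signalling the sender to retransmit)."""
    payload, crc = frame[:-4], int.from_bytes(frame[-4:], "big")
    return payload if zlib.crc32(payload) == crc else None

rng = random.Random(1)
msg = b"telemetry frame"
attempts = 0
while True:
    attempts += 1
    got = receive(noisy_channel(send(msg), 0.002, rng))
    if got is not None:
        break
print(f"delivered {got!r} after {attempts} attempt(s)")
```

The fault (a flipped bit on the channel) becomes an error (a corrupted frame), but the check-and-retransmit barrier prevents it from manifesting as a failure towards the user, exactly the fault tolerance pattern of Figure 5.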

For the completeness of the discussion a third barrier is introduced in Figure 5: contingency planning and disaster recovery, i.e. plans made and actions taken in order to limit or reduce the consequences of failures. Such plans are commonly made by companies having a business critical information and communication infrastructure. The plan deals with major failures of this infrastructure, also those caused by fires and other disasters, as well as other business critical functions. If we regard the publicly provided communication services, the traditional services and their provision are/have been independent, and the services are to some degree substitutable. For instance, if “the Internet does not work” we send a fax or make a phone call. This has contributed to the reduction of the consequences of failures. However, with the ongoing convergence of “Internet services”, fixed and mobile telephony, broadcast etc., this option may be reduced or removed due to common infrastructure and highly interwoven services. Companies that are highly dependent on communication services are commonly advised to obtain duplicate services from independent providers. However, this does not guarantee independence, since in the current telecommunication marketplace, these may buy services from common third parties.


2 The technological evolution and some of its implications

A low dependability is often accepted from new technological inventions providing a substantially new functionality. After this first “toy phase”, as the use and our dependency on the functionality increase, the requirements become stricter. Regard cars and planes as examples. Furthermore, history has shown that when failures may have severe consequences, we tend to become conservative with respect to introducing new solutions when a proven one exists. In this situation the technology deployment trend tends to be more toward marginal improvements than to introduce radically new solutions. Within information and communication services, we are simultaneously at both ends, as well as in the middle, of this maturity scale. Users “toy” with new Internet services having dependability attributes orders of magnitude less than for instance the ‘always working’ fixed telephony service. From a dependability point of view this mix of proven and immature technology, between no and strict requirements, poses major challenges. The text below will deal with some of them.

An old word of wisdom is that dependability cannot be added to a system after it is designed; it has to be an integrated aspect of the design from the beginning. This may easily be forgotten in the current eagerness to bring new services and networking concepts swiftly to the market, which may give backlashes

when the requirements move from the toy stage to the necessity stage.

Several of the ongoing techno-econo-commercial trends in the evolution of information and communication systems have a significant indirect impact on the dependability of these systems, but first, a brief glimpse at the history.

2.1 History

Telephony started out as a toy, but soon reached a technical maturity and a widespread use that has led the users to expect it to always work. The leading unavailability requirement for telecommunication systems of “less than two hours of accumulated system down time per forty years”, corresponding to three minutes per year, was originally a design goal set for the ESS 1 in the early 1960s [8]. This goal was set to match the empirical unavailability of the then current technology, the electromechanical crossbar switching system. To keep the perspective: in the mid 1980s the major manufacturers of SPC switching systems reported that this goal was reached. This design goal is also presumed to be the initiator of the often claimed five nines in other systems5). For a further historical summary of dependability of (voice) communications, see [26].

The other “ancestor” of the communication network, the Internet, had its origin in the fear of nuclear

Figure 5 Approaches to dependable systems

[Figure labels: causes (physical: permanent, transient, intermittent; human: specification, design, implementation, operation, interaction; internal and external) lead to faults; fault prevention, fault tolerance, and contingency and (disaster) recovery planning act as successive barriers between faults, errors, failures and their consequences]

5) Unavailability: 2 hours / 40 years = 3 minutes / 1 year ≈ 5.7 · 10–6 ≈ 10–5; Availability: 1 – 10–5 = 0.99999


attacks during the cold war. To cite Paul Baran in one of the groundbreaking works toward the Internet, also in the early 1960s [4]:

“We will soon be living in an era in which we cannot guarantee survivability of any single point. However, we can still design systems in which system destruction requires the enemy to pay the price of destroying n of n stations. If n is made sufficiently large, it can be shown that highly survivable system structures can be built, even in the thermonuclear area.”

In addition to a highly interconnected structure where any existing path between two users might be used, dependability was ensured by protection of packeted data by error detection codes and protocols handling retransmission etc. of corrupted or lost packets. In today’s effort toward making the Internet the common transport platform for all services, it should be kept in mind that it was designed for high survivability and not for real time services or as a “minimum cost” infrastructure. For a history of the Internet, see [24].

No quantitative dependability objectives have been formulated for the Internet. However, the ideas from its origin of inherent robustness and survivability have prevailed. Hence, it provides a fairly good dependability, in spite of its heterogeneity, rapid growth and rather uncoordinated operation and management. The unavailability of its backbone is reported to be in the order of 10–4, which is still an order of magnitude poorer than the corresponding figure for the telephone network, 10–5 [22].

Mobile communications poses additional challenges to the dependability. With user and environment mobility we get added complexity and a more composite operator/provider setting. Adding terminal mobility, we also have an unreliable radio access channel between the user equipment and the network. Limited coverage and handover failures may also disrupt the continuity of service. In the design of current and future wireless systems, measures have been taken to deal with these specific issues, but to cite H.A. Malec: “There was no indication that a major concern for the design of the 3G was reliability performance” [26].

2.2 Integration and differentiation

We are likely moving toward a truly service integrated network, i.e. one physically and logically integrated infrastructure providing all services. However, the dependability requirements of the various services will differ widely, e.g. leisure browsing vs. emergency services and business critical services. Some of

these aspects are tentatively illustrated in Figure 6. In particular, there is a difference in various user groups’ willingness to pay for a higher quality of the same service. A high dependability comes at a high cost with respect to equipment volume, development effort, management and maintenance. The additional cost (as a rule of thumb, at least 50 % of the total cost) needed to reach today’s telephone network standard is likely to be too high for ‘unimportant’ services and cost aware users. At the other end of the requirement scale, since telecommunications is an important part of the public infrastructure, regulators may set requirements on the dependability of some services. Hence, an (increased) differentiation of dependability with respect to kind of service, user group or user importance/willingness to pay is expected in the future. A first step in this direction is to obtain an idea of the value of various services to various user groups, as illustrated in Figure 7. How dependability differentiated services technologically should be provided is an open issue. Further on, there are even more challenging commercial and operational issues that must be solved. To make a transition to a differentiated dependability regime,

Figure 6 Illustration of differentiated dependability requirements. The shaded areas represent acceptable areas for various kinds of services

[Figure axes: mean service down time (td) vs. mean time between service interruptions (ti); a line of slope 1/U marks the unavailability requirement, with the expected/dimensioned value and the “guarantee” in an SLA indicated; shaded regions mark a high availability service and strict requirements for uninterrupted service]

ISSN 0085-7130 © Telenor ASA 2004

Page 36: 100th Anniversary Issue: Perspectives in telecommunications

34 Telektronikk 3.2004

solutions adding minimal complexity and allowing a gradual introduction are needed. As a corollary, solutions should be sought that inhibit inherently unreliable components and services from affecting services with stricter requirements.

Differentiation will add to the technological and operational complexity discussed in Section 2.3. In the integrated network, we will have a co-existence of

• services with highly differing requirements utilizing the same set of system components;

• system components with highly differing dependability attributes; and

• network operators and service providers with highly differing ability to deliver dependable services.

These items should also be kept in mind when ambient intelligent services are considered in Section 2.6.

2.3 Technological and organizational complexity

The increasing complexity of the systems and the consequence that human-induced faults constitute the major threat have already been discussed above. Coping with the complexity and keeping the number of faults low have been a major effort in software engineering for the last decades. Although one is not entirely on top of the problems, and there are exceptions, we are reasonably able to cope with the dependability of software (sub)systems by fault prevention and removal, cf. Figure 2. Preventing failures due to design faults will remain a major challenge, also in the future. However, operational faults have become the major cause of severe system failures. Operational faults and the corresponding means to avoid failures have hitherto received the least research attention, and should be placed in focus. The number of services, the potential service interaction problems, and the heterogeneity and sheer number of HW and SW network components will increase this liability for design and operational failures in the future.

An issue related to the technical complexity of the systems is the complexity of the commercial marketplace after the demonopolization of traditional telecommunication services, the growth of the commercial Internet, and the convergence with information services and entertainment. The “network” will be provided by a large number of (simultaneously) co-operating and competing actors, some having a vertically integrated network providing a set of services within a geographical area, others providing transport or services within one or a few segments or layers, some providing end-user services worldwide, etc. In general, there will be many actors involved in providing an end-user service. Mobility increases this multiple actor involvement. In this setting it will be extremely difficult to guarantee a specific dependability to the end user. Management of the delivered dependability attributes, as well as other QoS attributes, will require extensive co-operation among the actors. An anticipated element in this co-operation is extensive service level agreements (SLAs). These define the QoS that is to be delivered among the parties, how to measure the delivery, and reaction schemes if a party does not comply, e.g. some sort of economic compensation [18]. In this context, dependability QoS attributes pose a special challenge, since their typical or average values are impossible to measure on a short time scale, for instance in less than a year.

In addition to the professional/commercial actors, there is a trend toward private and non-commercial entities being involved in the provision of services, e.g. ad hoc and peer-to-peer networks, as discussed below. An interweaving of elements from these and “professional” services is foreseen.

Figure 7 What are the values of the various services to their users? (NB! Do not interpret the mapping directly)

[Figure: services – emergency services, business and domestic telephony, B-to-B, vendor and domestic-customer e-commerce, video on demand, downloads and interactive games – mapped against "value" and unavailability, on a scale from 10^-5 (5 min/yr) via 10^-4 (53 min/yr) and 10^-3 (9 hr/yr) to 10^-2 (88 hr/yr).]
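The unavailability scale used in the figure follows from simple arithmetic on the number of minutes in a year. A minimal sketch (the function name is ours, not from the article) converting an unavailability figure to expected annual downtime:

```python
# Convert an unavailability figure U (the long-run fraction of time a
# service is down) to expected downtime per year.

MINUTES_PER_YEAR = 365.25 * 24 * 60  # about 525 960 minutes

def annual_downtime_minutes(unavailability: float) -> float:
    """Expected downtime per year, in minutes, for a given unavailability."""
    return unavailability * MINUTES_PER_YEAR

for u in (1e-5, 1e-4, 1e-3, 1e-2):
    minutes = annual_downtime_minutes(u)
    if minutes < 60:
        print(f"U = {u:.0e}: about {minutes:.0f} min/yr")
    else:
        print(f"U = {u:.0e}: about {minutes / 60:.0f} hr/yr")
```

Rounding to whole minutes and hours reproduces the figure's scale: 5 min/yr, 53 min/yr, 9 hr/yr and 88 hr/yr.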

ISSN 0085-7130 © Telenor ASA 2004

Page 37: 100th Anniversary Issue: Perspectives in telecommunications

35Telektronikk 3.2004

2.4 The self-organizing networks

The area of information and communication systems and services is still getting more comprehensive. During recent years the fields of peer-to-peer networking, (non-military) ad-hoc networks, ubiquitous computing and grid computing have emerged and started to grow. Most of these are still at the “toy stage” and dependability is not a major concern for the pilot users. However, there is ongoing research on these kinds of systems/networks where dependability is an important issue. See for instance [27], where the MAD properties (Mobility, Adaptivity and Dependability) are introduced as crucial for these systems. Common to self-organizing networks is the lack of a supervisory, coordinating and managing entity and of a stable structure. A substantially higher degree of self management or autonomy than found in current systems is required. These networks have no authority that ensures their dependability or other QoS attributes. Hence, an application/system function that requires a certain dependability must interact with its network environment, assess the capabilities of the environment, and use the current computing and communication surroundings in a way that ensures that requirements are met.

This, together with the manpower issues discussed below, is a major driving force toward expecting future systems and system components to become more autonomous.

2.5 Manpower

Correct configuration and management are key issues for having dependable systems. As pointed out above, failures due to faults induced through configuration and management are currently perhaps the most salient contributor to system failures. Today, dealing with these tasks usually involves personnel. The required competence of this personnel increases as the systems get more complex, and competent personnel, both in terms of number and skill, become the limiting factor.

The manpower shortage anticipated for the ICT business will be driven by configuration and management effort rather than by development. The development effort grows roughly in proportion with the number of network components developed (HW and SW). However, each type of component is installed in a huge number of copies, and the configuration and management effort grows at least with the product of the number of components and their number of copies. Hence, without a shift in configuration and management paradigm, the demand for highly skilled personnel will limit growth and the dependability of the services provided. This is one main motivation for research on autonomous systems, e.g. [19, 1]. In addition to adaptivity and flexibility, intrinsic planning and dimensioning as well as lower cost to the user may be achieved. For future services based on ambient intelligence, autonomous configuration and management seem to be a prerequisite.
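The scaling argument above can be made concrete with a toy model (all numbers are illustrative, not from the article): development effort grows with the number of component types, while configuration and management effort grows at least with the product of component types and installed copies, so the two diverge as the network grows.

```python
# Toy model of the manpower scaling argument. All constants illustrative.

def development_effort(component_types: int) -> int:
    # Roughly proportional to the number of HW/SW component types developed.
    return component_types

def management_effort(component_types: int, copies_per_type: int) -> int:
    # At least proportional to the product of the number of component
    # types and the number of installed copies of each.
    return component_types * copies_per_type

for types, copies in [(100, 10), (200, 100), (400, 1000)]:
    dev = development_effort(types)
    mgmt = management_effort(types, copies)
    print(f"{types} types x {copies} copies: dev {dev}, mgmt {mgmt} "
          f"(ratio {mgmt / dev:.0f}x)")
```

Doubling the component types doubles development effort, but with two orders of magnitude more copies the management effort grows by a factor of 200 in this example.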

2.6 The challenge of the ambient intelligence

Up to now, we have discussed information and communication systems roughly in the context we know these systems today. What is foreseen in the future has been characterized with many cognomens, e.g. ubiquitous computing and communication, pervasive computing and ambient intelligence. An integration of sensors and processors into everyday objects like clothing, white goods, toys, furniture, etc., as well as for special purposes such as “health sensors” in our bodies, is predicted. These objects will be enabled to communicate with each other and the user, for instance by means of ad-hoc and wireless networking. “Natural interfaces” will allow the users to control and interact with the environment by voice, gestures, etc. in a personalised way depending on preferences and context. User interaction will apply new visualisation devices ranging from projections directly into the eye to large panorama displays. Information and communication technology is expected to be everywhere, for everyone, at all times.

Some of the tasks (services) expected to be handled by the ambient intelligence are mission or safety critical; for instance, supervision and support of persons with life threatening diseases, extensive economic transactions, support of emergency operations, and vehicle control. Failures may open the way for theft of identity and values, as well as for catastrophic second order failures. For other tasks, failures may lead to various degrees of inconvenience, ranging from lack of food and heating to the favorite music not playing. Even for systems without explicit dependability requirements, dependability is expected to be important simply to be profitable in a competitive low margin market – failures cost.

Regarding the dependability challenges in providing these services, it should also be kept in mind that:

• The same kind of “infrastructure” shall support all these kinds of tasks. In an (unregulated) competitive setting, it is unlikely that costly redundancy will be provided for fault tolerance.

• The hardware and software elements of these systems are expected to be inexpensive COTS (commercial off-the-shelf) components.

• The systems are characterized by a high complexity, heterogeneity, and dynamism in application mix and structure.


• There will be a wide range of network operators and service providers; from professional corporations, via inexperienced startups, to unskilled private individuals.

• There will be no system administrator (in many cases not even a man in the loop to detect failures). Autonomy in (re)configuration, fault detection and most of the fault handling is a prerequisite.

It is seen that the dependability challenge of the ambient intelligence also includes aspects of what is discussed in Sections 2.2 – 2.5 above. If we in addition include the security challenges imposed by the ambient intelligence concept, it is seen that new architectures are required to be trustworthy. Work is going on to be more specific about what the threats are and where the research effort should be put [28].

An implementation of ambient intelligence that we may trust relies on the ability to dependably access the core network and remote services. To achieve this, access networks must deliberately be built to enable independent, diverse access channels from (critical) end user devices to the core network. Radio access with overlapping cell structures or similar, in addition to wired access, may provide this diversity. However, high dependability requires that this is taken into account in the architecture and design, as well as in the topology of the network covering an area. Handover to maintain continuity of services, and in some cases diverse access, is another unsolved issue. To relate this to what was discussed in Section 2.3, these coordinated overlapping access options may be given by independent market actors.

From the above, it should be evident that we have a wide range of challenges with respect to providing trustworthy end-user services in the future ambient, integrated, multi-technology and multi-provider environment. In the next part of the paper, we consider the change of technology generation which has already started in the core communication infrastructure.

3 The core communication infrastructure

The dependability of the core communication infrastructure is very important, since outages will have immense failure effects/consequences. As pointed out in Section 1.3, there exist successful techniques for tolerating physical faults, both in the transport network and in the computing platforms used to control and manage the network and to provide services. Below, the ongoing trends for the transport network will be reviewed, and in Section 3.2 the computing platforms will be dealt with.

3.1 The transport network

A brief review of the strategies to obtain resilience, i.e. fault tolerance in transport networks, is given below, before we introduce the multilayer design of core transport networks and discuss the foreseen future developments.

Resilience mechanisms

The term resilience characterizes the ability to continuously provide service in spite of complete or partial failures of network elements.6) The following design parameters should be considered in the choice of mechanism:

• The additional network elements and the capacity needed;

• The time from an element failure until the transport service is restored. An interrupt shorter than 50 ms is considered to be without significant impact. (Originally, this requirement stems from telephony; other applications/services may of course set other requirements);

• The organization of fault handling and resource management;

• The ability to provide differentiated dependability;

• The dependability of the mechanism itself.

This section introduces protection, reconfiguration and a few self-healing strategies for making networks resilient, and discusses their merits.

• Protection. A dedicated spare path is established between the end nodes of the protected path or sub-path. There may be intermediate nodes as indicated in Figure 8, but these do not perform any switching or rerouting of the protected traffic. Protection comes in two variants, depending on whether the spare is active or not:

- 1+1 protection: The information (bits, packets, cells, …) is sent synchronously on both paths towards the destination. The destination selects the stream it considers to be the one with the higher integrity, e.g. fewer bit errors. Immediate (a few ms) handling of path failures, which allows them to go unnoticed by the users, is feasible.

6) Common synonyms of resilience are network robustness, fault-tolerance and survivability.


- 1:1 and N:M protection: These are stand-by protection strategies, where one (N) dedicated spare (or back-up) path(s) is/are available for one (M) active primary path(s). In case of failure, the receiving node must inform the sending node, which redirects the data stream. This incurs a temporary loss of service (10 – 100 ms), usually tolerated by the users.

Protection is simple, fast and proven, and has less costly control. It is the “workhorse” resilience strategy of today's transmission networks and is standardized for ATM and SDH networks [16, 17]. However, it requires more transfer capacity and is less flexible with respect to dynamic changes and differentiation than the alternatives outlined below.

• Reconfiguration is a strategy based upon a centralized management of the network. A network management system, see Figure 9, supervises and controls all network resources. It keeps a map of them, their utilization and status, etc. Based on this information it sets up transmission paths between nodes in the network. When a network failure occurs, the network management system reconfigures the routing through the network. See for instance [6] for a case with preplanned reconfigurations.

Reconfiguration has the potential of being flexible and using the transmission capacity very efficiently due to its ability to perform a global optimization. It may, however, be slow, in the order of minutes, and is vulnerable, since its centralization yields a single point of failure and it depends on information transfer in a network with failure(s) not yet handled.

• Self-healing is used as a common denominator for several strategies which have distributed control and require no dedicated pre-reserved transmission capacity. See for instance [7] for additional material.

- Primary with backup paths. When a path (the primary) is established through the network, one or more disjoint backup (stand-by) paths are established simultaneously, see Figure 10. These do not exclusively reserve any link capacity and do not carry any traffic unless a node or link along the primary path fails. We have a research effort on finding such paths distributively by emergent behaviour, see for instance [36]. Technologies like ATM and MPLS are suited for establishing such paths, which also enable load distribution and differentiation.

- Flooding is a fully distributed network management technique to restore communication following an element failure [33, 11]. The technique floods the network with requests in a search for available capacity to replace that of a failed link. Flooding, especially end-to-end restoration, poses a number of algorithmic problems. Restoration time is slow and no dependability/QoS guarantees may be given.

- Rerouting is a self-healing strategy relying on the traffic handling mechanisms of the network and the user equipment to re-establish individual packet flows/connections after they have been

Figure 8 Protection of communication between nodes A and B: 1+1 protection when the information flows simultaneously along both paths, 1:1 protection when one acts as a back-up for the other

Figure 9 A centralized fault management function gathers information from all network elements and, in case of an element failure, reconfigures the network



disrupted by a network failure. Internet (re)routing is a well known example. This strategy differs from the self-healing strategies presented above, as well as from protection and reconfiguration, where dedicated fault management entities restore the paths within the network for a bulk of flows/connections. Fault handling by rerouting cannot be 'invisible' to the user and no dependability/QoS guarantees can be made. The delay before a flow or connection is restored may be significant.
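As a toy illustration of the 1+1 scheme described above, the receiver can be modelled as keeping, per frame, the copy arriving with the higher integrity (here: fewer bit errors). Names and the error model are invented for the sketch; a real receiver would judge integrity from checksums or FEC, not from counted injected errors.

```python
import random

random.seed(1)

def transmit(frame: bytes, error_rate: float) -> tuple[bytes, int]:
    """Deliver a frame over a lossy path; return (frame, bit_errors).
    For the sketch we only count the errors we inject, without
    corrupting the payload itself."""
    errors = sum(1 for _ in range(len(frame) * 8) if random.random() < error_rate)
    return frame, errors

def one_plus_one_receive(frame: bytes, rate_a: float, rate_b: float) -> bytes:
    # 1+1: the same frame is sent synchronously on both disjoint paths;
    # the destination keeps the copy with higher integrity.
    copy_a, err_a = transmit(frame, rate_a)
    copy_b, err_b = transmit(frame, rate_b)
    return copy_a if err_a <= err_b else copy_b

# Even if path A fails completely (error rate 1.0), the stream continues
# from path B - the failure goes unnoticed by the user.
data = b"payload"
received = one_plus_one_receive(data, rate_a=1.0, rate_b=1e-9)
print(received == data)  # → True
```

In 1:1 operation the selection would instead be replaced by signalling back to the sender, which is why a 10 – 100 ms service loss is incurred.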

Rerouting, flooding and reconfiguration without prepared plans are denoted restoration techniques, as they seek to find new paths and restore the broken traffic flows after a failure, while the others have advance plans, i.e. are proactive. Some of the above strategies may be suited both for span reestablishment, where the affected traffic is diverted locally around the failed network element (usually a link), and for end-to-end reestablishment, where a path is (re)established between the source and destination node in the network. End-to-end reestablishment yields a better diversion of the traffic and has lower requirements for spare capacity.
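A greedy sketch of finding a primary path and a node-disjoint backup for end-to-end reestablishment: BFS shortest path, then BFS again with the primary's interior nodes removed. This is a simplified, centralized stand-in for the distributed/emergent approaches referenced above; a max-flow formulation would find disjoint pairs this greedy version can miss.

```python
from collections import deque

def bfs_path(adj, src, dst, banned=frozenset()):
    """Shortest path from src to dst avoiding 'banned' nodes, or None."""
    prev = {src: None}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nxt in adj.get(node, ()):
            if nxt not in prev and nxt not in banned:
                prev[nxt] = node
                queue.append(nxt)
    return None

def primary_and_backup(adj, src, dst):
    """Primary path plus a node-disjoint backup (greedy sketch)."""
    primary = bfs_path(adj, src, dst)
    if primary is None:
        return None, None
    interior = frozenset(primary[1:-1])  # endpoints stay usable
    backup = bfs_path(adj, src, dst, banned=interior)
    return primary, backup

# Small six-node mesh loosely resembling Figure 10.
adj = {
    "i": ["j", "m"], "j": ["i", "k", "n"], "k": ["j", "l"],
    "m": ["i", "n"], "n": ["j", "m", "l"], "l": ["k", "n"],
}
primary, backup = primary_and_backup(adj, "i", "l")
print(primary, backup)  # two routes sharing no intermediate node
```

Since the backup carries no traffic until the primary fails, several such backups may share the same unused link capacity, as Figure 10 illustrates.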

The properties of the fault handling and redundancy strategies presented above are summarized in Table 1. The entries should be considered as indicative. A wide range of values may be seen, especially for reconfiguration and flooding. All the strategies may be used to introduce dependability differentiation; however, only the primary/backup approach enables this without introducing additional control functionality and/or interactions between currently independent control and management functionality.

The “hard wired” protection schemes are fast, simple and proven, but may be costly in terms of spare resources required. They are also inflexible with respect to rapid changes in topology and traffic pattern, as well as to differentiation. In an Internet based platform, the advantages of protection will be bal-

Figure 10 Illustration of shared capacity for backup paths. The primary paths between i and l, and j and n, have no network element in common, and are assumed to fail independently. Hence, they share the unused capacity on the link between j and k for backup

Strategy          Resources              Control                     Speed     Dep. diff.  Spare reqmnt.
Protection        Dedicated              Distributed control         Fast      No          Large
Reconfiguration   Preplanned/on demand   Centralized                 Medium –  No          Small
Primary-backup    Preplanned             Distributed control/man.    Fast      Yes         Small
Flooding          On demand              Distributed management      Medium +  No          Medium
Rerouting         On demand              Distributed control         Slow      No          Small

Table 1 Organization and properties of various network fault handling strategies

[Figure: a) primary and backup paths between i and l, and j and n, in a six-node mesh (i, j, k, l, m, n); b) capacity allocation on link j, k, showing the traffic loads between the node pairs sharing its unused capacity for backup.]


anced against the gain of rerouting and MPLS based primary-backup schemes. The reduction in the number of management layers discussed below will invigorate the latter alternatives.

Layering

Transport networks have a layered design, for instance as illustrated in Figure 11. The physical bottom layer provides a link between nodes in the layer above, which again may provide logical links to the layer above, etc. Note that the topology of the networks seen at the various layers may differ. Each of the layers typically has its own technology adapted to its purpose. Figure 11 indicates four layers with corresponding technologies familiar to most readers, where optical switching, OxS, is on its way into the network.

When ensuring dependability in this multilayer architecture, we are met with a number of questions: a) In which of these layers should we introduce the ability to tolerate physical faults of the transmission medium? b) Which resilience strategy and mechanisms should be used? c) How should the fault management in the various layers be co-ordinated? The answers to these questions are complicated by the fact that, in contrast to what is illustrated in Figure 11, networks are not homogeneously and strictly layered; the structure may vary in adjacent autonomous systems and the lower layers may carry several independent transport services7) as illustrated in Figure 12.a. Furthermore, not all transport services (or end-users) have the same dependability requirements and hence the same need for fault-tolerance in the network.

As indicated in Figure 12.a, each “layer” has its individual resource management. This hampers the coordination in provisioning dependability differentiated services, and it complicates the fault management since it is necessary to avoid several layers competing to handle the same failure8). An 'independent' design and operation of each layer may also introduce excessive (or missing) spare capacity. For discussions on the design of multilayer networks and on which layer it is most profitable to introduce redundancy in, see for instance [20, 23, 13, 35]. Some rules of thumb:

1. The higher up in the network, the greater the flexibility, and the spare resources needed are likely to be less;

2. The higher up in the network, the longer the delay from the failure event until service is restored;

3. Low layer redundancy handling tends to introduce the least complexity, since the mechanisms may be simpler and the redundancy handling may be common to a number of higher layer network services;

4. Fault tolerance at a layer can only handle faults at its level and the levels below. Hence, some fault tolerance is always required at the highest level.

There are strong arguments for a reduction of the number of layers and for a co-ordination of the resource management in the layers. At the same time it is advocated that an IP based network with (G)MPLS and QoS extensions should provide all transport services, i.e. the backbone networks tend toward an organization as illustrated in Figure 12(b). This should in principle enable better resource utilization and lower cost. An aspect of this common management architecture is that it becomes feasible to tailor the dependability provided to that required, cf. Figure 6, by using a mixture of resilience mechanisms like the primary-backup approach and rerouting, as well as preemption of low requirement traffic. However, with the observations of the current Internet in mind, e.g. [22], remembering that the current Internet to a large extent is supported by the pro-

7) By transport service in this context is meant the transport of information through the network, not the service provided by a transport protocol.

8) Nested hold-off timers are the commonly recommended approach, see for instance [23].

Figure 11 Layers in a backbone transport network

[Figure: layer stack with IP/MPLS and ATM over SDH over OxS/WDM.]


tection mechanisms in the SDH network, and that the Internet resilience technology required is currently at the R&D stage, care must be taken in this technology transition. Otherwise, we may move from a situation where physical faults in the transport network are a moderate contributor to lack of dependable services, cf. Section 1.2, to a situation where they may become a major problem.

3.2 Control, management and service provision

There are two basic strategies for achieving fault tolerance in the computing platform providing control, management and provision of services. See Figure 13 for illustrations.

1. To use fault-tolerant network nodes, i.e. to make the computing and database platforms supporting the functionality of the network nodes fault-tolerant, having redundant computing and/or storage capacity in each node. The replicas of the processes are executed or stored within the same node. For instance, to make an intelligent network SCP fault tolerant, a duplicated synchronous computer may be used to execute the functions.

2. To introduce fault tolerance at the network level, i.e. to have co-operating replicas of software objects/processes providing network functions in several nodes and thereby enable tolerance of node failures. Network level fault-tolerance is based on dependable distributed computing technologies.

The legacy system

The legacy switching nodes and network centric server nodes used in PSTN use strategy 1. The architecture of the nodes may vary. Fault-tolerance may

Figure 13 Various options for providing fault tolerant computing platforms. At Site I, the node itself is made fault tolerant by duplication, i.e. replicas of processes A and B run synchronously on two computers. In the next option, the distributed computing paradigm is applied, and the same processes are distributed over Sites II, III and IV. The last option illustrated is a computing cluster at Site V, where a mix of strategies may be applied

[Figure: Sites I – V interconnected by a transport network, with process replicas A, B and X, Z placed according to the options above.]

Figure 12 Current and intended future relation between transport network layers providing transport services

[Figure: a) interaction between network layers – n*64 CS, ATM, IP/MPLS and other switched transport services over SDH, OXS/WDM and other physical media, each layer with its own resource management; b) presumed future layering – transport service provision over IP/MPLS on OXS/WDM and other physical media, with common resource management.]


for instance be achieved by micro-synchronous duplication, as in Site I of Figure 13, e.g. the design used in AXE [32]. Alternatively, a distributed and mainly load-shared architecture, as in System 12 [5], has been applied. This architecture has commonalities with the cluster architecture illustrated at Site V in Figure 13, discussed below. See for instance [2] for a presentation of the design and analysis of such systems. These “legacy” architectures have proven their ability to provide highly available services. In addition, this approach has several other advantages. In most systems, it hides the fault-tolerance mechanisms from the application designer. The synchronization between replicas incurs little or no delay, and the strict real time QoS requirements of communication systems are met. However, these systems are dedicated systems and are more expensive than off-the-shelf computer and database systems. For some architectures scalability may be an issue.

The distributed platform

The alternative strategy 2 is illustrated in Figure 13, Sites II – IV. The computing and storage hardware of the nodes in the network is not fault tolerant. The services delivered by the network, however, may be fault tolerant by having replicas of the various service providing software objects/modules in several network nodes. For instance, in Figure 13 there are two synchronous replicas of object/module A. These replicas have consistent states, and a failure of one of the replicas, or of the node hosting it, may be tolerated. The replication may be handled by appropriate middleware, see for instance [25]. As in strategy 1, this requires redundant computing and storage capacity in the network. However, off-the-shelf equipment may be used, and the redundancy installed for the various objects/modules may be tailored to the dependability requirements of the services they provide. Thereby, increased flexibility and reduced cost may be achieved.

It is a common objective that replicas shall be location and fault transparent, i.e. clients/users of the services shall not need to know where the replicas are located or whether some of the replicas have failed. Achieving a flexible replication scheme and consistent replicas across an unreliable network is no simple task. For more than two decades, considerable research effort has been put into these problems. Tool-kits and prototype systems have been developed, cf. for instance our effort, Jgroup/ARM [29]. A standard for fault tolerance in CORBA exists [30]. This approach was also an element of the Telecommunication Information Networking Architecture (TINA) [34].

So will fault tolerance at the network level and the distributed computing paradigm rule in the future? If this shall be the case, the fault-tolerance and replication strategies applied must be tailored to the requirements and adapted to the particularities of telecommunications. The critical issue is to keep replicas which may substitute for each other consistent, i.e. synchronized, without incurring intolerable delays and increases in workload. Some sample issues:

• QoS requirements put rather strict real time requirements on many tasks, while synchronization between distributed replicas may be time consuming, especially on non-broadcast media like WANs.

• Many tasks have short processing times compared to the synchronization times between replicas; hence an approach must be chosen such that an immense overhead can be avoided.

• Reduce the state information that needs to be persistent in case of failure and requires synchronization. (Some of the Internet's success as a transport network may be attributed to its statelessness.)
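Location and fault transparency can be sketched from the client side: a proxy hides which replica answers and masks replica failures by failing over. The class and method names below are invented for illustration; this is not the Jgroup/ARM or FT-CORBA API, and a real middleware would also keep the replicas' states consistent.

```python
class ReplicaFailure(Exception):
    """Stands in for a remote invocation failing (crashed node, timeout)."""

class ReplicatedServiceProxy:
    """Client-side proxy: the client addresses the service, never a
    particular replica, and does not learn that some replicas failed."""

    def __init__(self, replicas):
        # Callables standing in for remote stubs to replica locations.
        self._replicas = list(replicas)

    def invoke(self, request):
        for replica in self._replicas:
            try:
                return replica(request)
            except ReplicaFailure:
                continue  # mask the failure and try the next replica
        raise RuntimeError("all replicas failed")

# Two replicas of "object A"; the first has crashed.
def crashed_replica(request):
    raise ReplicaFailure

def healthy_replica(request):
    return f"ok: {request}"

proxy = ReplicatedServiceProxy([crashed_replica, healthy_replica])
print(proxy.invoke("lookup"))  # the first replica's failure goes unnoticed
```

The sample issues above show up precisely in what this sketch omits: keeping the two replicas' states synchronized within the real time budget of the task.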

Systems based on the distributed computing paradigm have not yet demonstrated an operational dependability which matches that of “traditional” systems. This delays the adoption of “distributed computing” in telecommunications. It should also be remembered that the middleware platform itself may become a dependability bottleneck.

The dedicated server (cluster)

In the edge centric Internet approach, dependability requirements for the end-user services have typically not been explicitly stated. The dependability objectives are normally set by the provider as a trade-off between the loss in case of failure and the cost of increased dependability. The servers have typically been built with inexpensive off-the-shelf hardware and with an architecture adapted to the functions/services that should be provided. This is also the case for fault tolerance aspects, where the design has been ad-hoc, using a mix of proprietary solutions as well as elements from distributed computing and commodity fault tolerant subsystems like databases. As some services have become big business, the server systems have become huge, and due to their size they have frequent element failures. Hence, high availability provided by fault tolerance has become an important issue, see [31] for an interesting discussion. A commonality among these systems is a stateless front end, which easily may share the processing load, and stateful back-ends with fault tolerant stable storage, avoiding many of the synchronization problems discussed above. Located at a single site, these systems are vul-

ISSN 0085-7130 © Telenor ASA 2004

Page 44: 100th Anniversary Issue: Perspectives in telecommunications

42 Telektronikk 3.2004

nerable to environment failures, like fire, and hence,they may have copies at other geographical sites,eventually loosely synchronized and operating inload sharing.

At the moment it seems as if dedicated server, control and management systems with architectures targeted at their purpose and dependability requirements, built on commodity hardware and subsystems, are a likely continuation of the evolution. Hence, a migration to systems where all functions are placed transparently on a general distributed platform, as for instance foreseen in the work towards TINA (the Telecommunication Information Networking Architecture) [9], is less likely in the near future.
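The server-cluster pattern described above – stateless front ends sharing the processing load over replicated, stateful back-ends – can be sketched in a few lines. This is a minimal toy illustration, not any particular product; all class and function names are invented for the example, and the "replication" is a naive synchronous write to every replica.

```python
# A toy model (hypothetical names throughout) of the server-cluster
# pattern above: stateless front ends over replicated stateful back-ends.

class BackendReplica:
    """Stateful back end holding a key-value store (the 'stable storage')."""
    def __init__(self):
        self.store = {}
    def write(self, key, value):
        self.store[key] = value
    def read(self, key):
        return self.store.get(key)

class ReplicatedBackend:
    """Naive synchronous replication: every write goes to all replicas,
    so a read may be served by any surviving replica after a failure."""
    def __init__(self, n_replicas=2):
        self.replicas = [BackendReplica() for _ in range(n_replicas)]
    def write(self, key, value):
        for r in self.replicas:      # synchronize all replicas on write
            r.write(key, value)
    def read(self, key):
        return self.replicas[0].read(key)

def front_end(request, backend):
    """Stateless front end: it keeps no session state, so any number of
    identical instances can share the processing load."""
    op, key, *rest = request
    if op == "write":
        backend.write(key, rest[0])
        return "ok"
    return backend.read(key)

backend = ReplicatedBackend()
front_end(("write", "user:42", "active"), backend)
print(front_end(("read", "user:42"), backend))   # -> active
```

Because the front end holds no state, only the back-end replicas need the synchronization discussed above; the front-end tier can be scaled out or replaced freely.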

4 Summary and concluding remarks

It is predicted that in the future, information and communication services will be far more important and interwoven into our lives and societies than they are today. That we must be able to trust many of these services is obvious, i.e. they must be dependable. The trend is towards a single future integrated network providing these services. This leads to an increased vulnerability to failures of “the network” as substitute “services” vanish. High dependability comes at a cost. Hence, it is likely that we have to be more explicit about dependability requirements, and that there will be a differentiation with respect to dependability between services in and users of the integrated network.

The overall complexity of the information and communication infrastructure is increasing, both with respect to technical solutions and with respect to the multitude of market actors in the future, converged, integrated service provisioning scenarios. To cope with this complexity is a major challenge. Hitherto, complexity has primarily been considered a system design issue. However, there are indications that managing the complexity will become even more important in operation and maintenance. Increasing autonomy in system management is the likely means to deal with this.

The widespread use of ambient intelligence for other than toy services requires a huge effort in ensuring the dependability. At present, neither the requirements nor the means are fully understood. The core information and communication infrastructure will also be of major importance. In this domain, the requirements and solutions are better understood, at least for dealing with physical failures. With respect to computing platforms, it seems that the current trend is a pragmatic approach. Commodity equipment and subsystems, elements from distributed computing, dedicated designs, etc. are used to achieve a dependability tailored to the application-specific requirements. In the transport network the trend is toward fewer layers and an integrated management. Generally speaking, there is a shift in technology which yields higher flexibility, better resource utilisation and lower cost. However, this may add unmanageable levels of complexity, and care must be taken not to abandon proven and stable technologies too rapidly. The need for a stable communication infrastructure favours incremental changes rather than radical shifts in technology.

Acknowledgement

The author would like to thank the Editor, Poul E. Heegaard, Svein J. Knapskog, Victor Nicola and Otto Wittner for comments on an earlier version of the manuscript.

References

1 Aagesen, F A, Helvik, B E, Anutariya, C, Shiaa, M M. On adaptable networking. In: The First International Conference on Information and Communication Technologies, ICT’2003. April 2003.

2 Ali, S R. Digital Switching Systems; System Reliability and Analysis. McGraw-Hill, 1997.

3 Avizienis, A, Laprie, J-C, Randell, B. Fundamental concepts of dependability. In: Position Papers for the Third Information Survivability Workshop, ISW-2000. The Institute of Electrical and Electronics Engineers, Inc, October 24–26, 2000.

4 Baran, P. On distributed communications networks. IEEE Transactions on Communications, 12 (1), 1–9, 1964.

5 Beyltjens, van Houldt. System 12, switching system maintenance. Electrical Communication, 59 (1/2), 80–88, 1985.

6 Coan, B A, Leland, W E, Vecchi, M P, Weinrib, A, Wu, L T. Using distributed topology update and preplanned configurations to achieve trunk network survivability. IEEE Transactions on Reliability, 40 (4), 404–416, 1991.

7 Decina, M, Plevyak, T (eds). Special Issue: Self-Healing Networks for SDH and ATM. IEEE Communications Magazine, 33 (9), 1995.


8 Downing, R W, Novak, J S, Tuomenoksa, L S. No. 1 ESS maintenance plan. The Bell System Technical Journal, 43 (5), 1961–2019, 1964.

9 Dupuy, F, Nilsson, G, Inoue, Y. The TINA consortium: Towards networking telecommunication information services. IEEE Communications Magazine, 33 (11), 78–83, 1995.

10 Laprie, J-C (ed). Dependability: Basic Concepts and Associated Terminology. Dependable Computing and Fault Tolerant Systems, vol 5. Springer, 1992.

11 Egawa, T, Komine, T, Miyao, Y, Kubota, F. QoS restoration for dependable networks. In: Network Operations and Management Symposium, 1998 – NOMS 98, IEEE, vol 2, 503–512, 1998.

12 Emstad, P J, Helvik, B E, Knapskog, S J, Kure, Ø, Perkis, A, Svensson, P. Annual Report 2003; Centre for Quantifiable Quality of Service in Communication Systems, chapter A Brief Introduction to Quantitative QoS, 18–29. NTNU, 2004.

13 Nederlof, L et al. End-to-end survivable broadband networks. IEEE Communications Magazine, 33 (9), 63–70, 1995.

14 Gray, J. A census of Tandem system availability between 1985 and 1990. IEEE Transactions on Reliability, 39 (4), 409–418, 1990.

15 ITU-T. Terms and definitions related to quality of service and network performance including dependability. Geneva, 1994. ITU-T E.800 (08/94).

16 ITU-T. Types and characteristics of SDH network protection architectures. Geneva, 1998. ITU-T G.841 (10/98).

17 ITU-T. ATM protection switching. Geneva, 1999. ITU-T I.630 (02/99).

18 Jensen, T, Grgic, I, Espvik, O. Managing quality of service in multiprovider environment. In: Proceedings from Telecom99. ITU, 1999.

19 Kephart, J O, Chess, D M. The vision of autonomic computing. Computer, 36 (1), 41–50, 2003.

20 Krishnan, K R, Doverspike, R D, Pack, C D. Improved survivability with multilayer dynamic routing. IEEE Communications Magazine, 33 (7), 62–68, 1995.

21 Kuhn, D R. Sources of failure in the public switched telephone network. Computer, 30 (4), 31–36, 1997.

22 Labovitz, C, Wattenhofer, R, Venkatachary, S, Ahuja, A. Resilience characteristics of the Internet backbone routing infrastructure. In: Proceedings of the Third Information Survivability Workshop, Boston, MA, USA, October 2000.

23 Lai, W, McDysan, D (eds). Network hierarchy and multilayer survivability. IETF RFC 3386, November 2002.

24 Leiner, B M, Cerf, V G, Clark, D D, Kahn, R E, Kleinrock, L, Lynch, D C, Postel, J, Roberts, L G, Wolff, S. A brief history of the Internet. 10 Dec 2003.

25 Maffeis, S, Schmidt, D C. Construction of reliable distributed communication systems with CORBA. IEEE Communications Magazine, 35 (2), 56–60, 1997.

26 Malec, H A. Communications reliability: a historical perspective. IEEE Transactions on Reliability, 47 (3), 333–345, 1998.

27 Malek, M. The NOMADS republic. In: Proceedings of the International Conference on Advances in Infrastructure for Electronic Business, Education, Science and Medicine on the Internet (SSGRR 2003), L’Aquila, Italy, 2003. Telecom Italia.

28 Masera, M, Bloomfield, R (eds). A dependability roadmap for the information society in Europe; Part 1 – An insight into the future. Technical Report Deliverable D1.1, Project IST-2001-37553; Accompanying Measure System Dependability (AMSD), August 2003.

29 Meling, H, Montresor, A, Babaoglu, O, Helvik, B E. Jgroup/ARM: a distributed object group platform with autonomous replication management for dependable computing. University of Bologna and NTNU, 2002. Technical Report UBLCS 2002-12.

30 OMG (Object Management Group). CORBA 2.5 – Chapter 25 – Fault tolerant CORBA. September 2001.


Bjarne E. Helvik (51) is Professor at the Norwegian University of Science and Technology (NTNU), Department of Telematics. He is also with the Norwegian Centre of Excellence (CoE) in Quantitative Quality of Service in Communication Systems. He received his Siv.ing. (MSc) degree from the Norwegian Institute of Technology (NTH) in 1975 and was awarded the degree Dr. Techn. in 1982. He previously held various positions with ELAB and SINTEF Telecom and Informatics, and was Adjunct Professor at NTH.

His fields of interest include QoS; dependability modelling, measurements, analysis and simulation; fault-tolerant computing systems; and survivable networks. His current research focus is on distributed, autonomous and adaptive fault management in telecommunication systems, networks and services. He also deals with teletraffic issues and the architecture of networks and network based systems.

email: [email protected]

31 Oppenheimer, D, Patterson, D A. Architecture and dependability of large-scale internet services. IEEE Internet Computing, September–October 2002.

32 Ossfelt, B, Jonsson, I. Recovery and diagnostics in the central control of the AXE switching system. IEEE Transactions on Computers, 482–491, June 1980.

33 Struyve, K, Van Caenegem, B, Van Doorselare, K, Gryseels, M, Demeester, P. Design and evaluation of multi-layer survivability for SDH-based ATM networks. In: Global Telecommunications Conference, 1997, GLOBECOM’97. IEEE, vol 3, 1466–1470, 1997.

34 TINA-C. Quality of service framework. TINA-C (Telecommunication Information Networking Architecture consortium), 1994. Technical Report TR-MRK.001-1.0-94.

35 Veitch, P, Johnson, D. ATM network resilience. IEEE Network, 11 (5), 26–33, 1997.

36 Wittner, O, Helvik, B E. Distributed soft policy enforcement by swarm intelligence; application to loadsharing and protection. Annals of Telecommunications, 59 (1-2), 2004.

37 Xu, J, Kalbarczyk, Z, Iyer, R. Networked Windows NT system field failure data analysis. In: Proc. 1999 Pacific Rim Int’l Symp. Dependable Computing. Los Alamitos, CA, IEEE CS Press, 1999.


1 Entering the era of connectivity

We have been through different epochs in the evolution of information and communication technology (ICT). During the 1980s we digitalised the telecommunications network and developed the modern computer; during the 1990s we explored the realm of information – how to create it, how to store it, how to disseminate it, and how to retrieve it. At the beginning of the new millennium, we have just started to explore the vast area of how to interconnect the trillions of autonomous CPUs we are placing everywhere: in homes, on and in our bodies, in offices, in factories, in automobiles, in actuators and sensors of the infrastructures, in earth orbit, on Mars, and so on.

The infrastructure of society is changing rapidly, and so is the way in which we organise society. The international society becomes smaller and is knit even more tightly together than before. The separation between us is no longer only “six degrees”1) in acquaintanceship, but in all dimensions of society.

Evolution is taking us towards a society immensely more complex than it is today. This also introduces new threats against society. The situation is soon – or perhaps already – such that the warmonger no longer needs to deploy soldiers on foreign soil, but can simply launch an attack taking down the infrastructure of the enemy from behind his desk. This can even be done by a single individual waging a war declared by no one other than his or her own mind.

This paper describes the nature of the new threat and what it may lead to if we are not careful. The idea is based on mathematical theories discovered during the last few years, and the analysis of the new threats and their consequences has just begun.

The paper is an odyssey through some of the developments that have taken place during the last ten years, where the objective is to uncover some of the new threats against society that evolution has created. The nature of these threats must be recognised and understood before we can design the countermeasures against them.

2 The anatomy of a paradigm shift

Paradigm shift is a prestigious term that must be used with the utmost care. It is too easy to claim that a paradigm shift has taken place even if the change is small, local, reversible and hardly recognised by anyone else than a small group of enthusiasts. Contrary to what many people believe, general relativity and quantum mechanics do not represent paradigm shifts in physics. They are just refinements of classical theories: Newtonian mechanics is still valid provided that the space where the phenomenon takes place is neither tiny nor immense.

Vulnerability exposed: Telecommunications as a hub of society

Jan A. Audestad is Senior Adviser in Telenor

During the last ten years, information and communication technology has changed the way in which we manage society in an irreversible manner. The technical and managerial infrastructures of society have become very robust against random attacks by Nature, terrorists and others. On the other hand, the infrastructures have become extraordinarily vulnerable to attacks directed at certain functions or equipment – knowing how and what to attack may disrupt the basic services of society: banking, energy delivery, logistics, production and telecommunications. It has recently been discovered that this contradictory behaviour is a result of the structure of the physical and abstract networks interconnecting society. The structural principle is called scale-freeness. In scale-free networks most of the nodes are interconnected with few other nodes, but there is a small but significant number of nodes called hubs that are connected to a large number of other nodes. The Internet and the Web are scale-free networks. In a graph interconnecting all societal activities, ICT represents the single most important hub – if the ICT infrastructure is destroyed, all of society collapses.

1) Milgram discovered in 1967 that we can pass a letter to a completely unknown person anywhere in the world via about six mutual acquaintances. This is called the small world syndrome. The degree of separation in the Web is 19 degrees; that is, the billions of web pages you can possibly reach are just about 19 mouse clicks away.

A change in the direction of science or society is a paradigm shift if something huge and important has taken place that has a lasting impact. In order to be a paradigm shift, the transition taking place in society must be irreversible; that is, it is not possible – at least without spending much energy – to return to the state that existed before the transition started.

Such a transition has just taken place. We may call it the computerisation of society. The transition is illustrated in Figure 1.

The transition has taken place during the last ten years. Ten years may look like a long period of time but it is not. This is the time it takes to develop a new technology; it is the time it takes the market for a new technology to mature; it is the time it takes the matured market to be replaced by a radically new technology. The time constant of changes in society is thus in the order of 30 years; i.e. the time constant is of the order of one generation. Often the changes take place over much longer periods of time than this.

The change that took place during the last ten years is also much more profound than most of the earlier changes society has gone through. The gradient of the change is therefore much steeper than we are used to. The change is also irreversible: we cannot easily go back to the methods by which we previously managed society since we have neither the proper technical infrastructure nor the skills to run the earlier systems.

The extent of the change is illustrated in Figure 1. The event triggering the change was the World Wide Web. What made the change possible was that the performance/cost ratio of computers had passed a critical threshold: the personal computer had become powerful enough for professional applications and cheap enough for everyone.

The change would have taken place even without the Web, but the start of it might have been delayed by several years and the dynamics of the change might have been less ferocious.

The major elements of this change can be described as follows.

2.1 Computer technology

Before 1994 almost all computer applications were localised to a single computer or to a small local cluster of computers. There were few applications distributed over several sites. Now almost every computer application is distributed over several computers at remote locations. The Internet makes it possible for all these computers to interact with one another.

If we define a computer to be a device containing a CPU, there were rather few devices that could be called a computer in 1994 compared with the current situation. The number of CPUs in the world has increased by at least a factor of 1000 since 1994. It is projected that this number will increase by another factor of 10 to 100 before the end of this decade. There will then be between 10 and 100 trillion devices containing a CPU – between 1000 and 10,000 for each of us. And this number is likely to increase steadily in the future – unless a major operational failure of these very devices brings us back to the Stone Age.

Before 1994 computers were the toys of the professionals. Now almost everyone in the industrialised world has access to a computer. Most of us cannot do a day’s worth of work without using a computer directly or indirectly. At home we activate the device to play games, pay bills, buy goods or search aimlessly for information.

2.2 Telecommunications

Telecommunications has until recently been concerned with interactions between people. This is still an important application of telecommunications. However, since 1994, an increasing proportion of the traffic has taken place between people and computers. This is the domain of the Internet. With the tremendous growth in the number of autonomous devices containing CPUs, the majority of traffic in the future will be between machines. All the trillions of devices will be connected directly or indirectly to the Internet. This revolutionises the whole concept of telecommunications.

Figure 1 The extent of the paradigm shift

1994 – Computer technology: non-distributed, localised, professional. Telecommunications: person-to-person, telephony, stationary. Society: non-critical ICT dependency, physical vulnerability, simple responsibility chain.

2004 – Computer technology: distributed, ubiquitous, common. Telecommunications: machine-to-machine, data communication, mobile. Society: critical ICT dependency, logical vulnerability, complex responsibility chain.


Before 1994, most of the telecommunications traffic was made up of telephone calls. Now most of the traffic is data communications. This trend will continue, requiring that the telecommunications operators alter their business models in order to get revenues from the new traffic patterns. Before 1994, the idea was to merge data communication with telephony; now, telephony is merged with data communication.

Telecommunications used to take place between stationary points. Now much of the traffic takes place between mobile terminals. This trend is only increasing, and it is expected that most of the traffic will soon be between mobile entities, most of these being autonomous devices containing a CPU.

2.3 Society

Society has become critically dependent on the information and communication technology (ICT). This was not the case ten years ago. Much of the remainder of this paper is about this problem.

This has caused a change in the way in which we assess vulnerability and risk. Ten years ago our focus was on natural disasters appearing randomly and intended damage caused by bombs and explosives and the spreading of pathogens, poisons and nuclear waste. Now I am pleased to observe that the awareness that we may be hit by logic bombs, viruses, worms and other computer horrors causing even more damage than their physical cousins is slowly diffusing into the minds of politicians and managers.

Except for events such as pandemics, meteorite hits and worldwide war, most events causing us trouble are local and can be handled by a single organisation. Information attacks are different. They may hit all of us independent of nationality, wealth, skin colour or religious belief. The source may be a single computer run by a single person just about anywhere in the world. To counter such an attack requires complex, international cooperation and coordination, not just between friends but also across traditional conflict zones. To my knowledge there is no real effort to establish worldwide cooperation against information warfare. What may make such efforts fruitless is that information warfare may be part of the arsenal of weaponry a country has at its disposal – a fact you will not disclose to your enemies. This is even so on the small scale. Companies badly hit by data crime do not report such events because it may damage their reputation in the marketplace. This type of irrationality seems to be seated in the deep abysses of our minds: lice are spread because people do not report such incidents for fear of being labelled unhygienic.

3 Emergence of a production factor: ICT and society

The four classical production factors are land or natural resources, labour including skills, capital in terms of assets, and entrepreneurship or the ability to exploit opportunities. These are the resources that are used in the process of production.

During the last ten years, ICT has changed the way in which goods are produced, marketed, distributed and sold. There are three aspects of this evolution: ICT has become raw material, ICT is included as a component in the product, and ICT is used as a production tool.

This evolution has made society critically dependent on ICT.

3.1 ICT as raw material

We have had products where ICT, in particular telecommunications, has been used as raw material for a long time. These are products where ICT is one of the components of the product, for example television broadcast services. However, the number of such products has increased dramatically during the last few years. During the dot.com boom, it became evident that the pricing of these products is rather tricky and that they behave differently in the marketplace than other products. One reason is that it has been customary to regard ICT, in particular the telecommunications component, as a separate product independently of the contexts in which it is used. Sometimes the customer is not willing to pay for the product as a whole but only for the telecommunications component. This is so because the customer views the product as a telecommunications product only. The electronic newspaper is a good example.

Figure 2 Delivery of products by independent networks: a) actual delivery of the product by cooperating networks; b) perceived delivery of a newspaper; c) perceived delivery of an electronic banking service

In other cases, ICT and other components of the product are paid for separately, such as in electronic banking services: the customer does not realise that ICT is an integral part of the electronic banking service.

In both cases, the final product consists of components delivered in parallel by several providers. The final product is in fact put together by the customer. The customer does not always realise that this is the way it is done. In the example of the newspaper, the customer views the product as a single entity delivered by the telecommunications provider and is thus willing to pay for this single service. In the banking example, the customer perceives the services received from the bank and the telecommunications provider as independent services and is therefore willing to pay for them separately.

These principles are illustrated in Figure 2.

3.2 ICT as component of a product

The modern automobile has become a formidable computer platform containing numerous CPUs, memory chips and communication devices. The computer platform is just a part of the vehicle in the same way as the tyres, steering wheel and crankshaft. The same example holds for aircraft, television sets, buildings, household appliances and so on. ICT is just a component of the final product.

3.3 ICT as production tool

Finally, ICT is used in almost all production of goods and services. Examples are financial services, logistics, production of electric power, extraction and refining of oil, and production of food. Society grinds to a halt if the ICT infrastructure stops functioning.

ICT is thus taking on roles similar to the classical factors of production. In the first case it is a resource taking on the same role as land. In the third case it is an asset taking on the role of capital. In general, ICT changes the need for capital and labour.

Globalisation of the company and decentralisation of management and production are examples of the capabilities ICT offers.

4 New mathematical insight: Random networks and scale-freeness

A graph is shown in Figure 3. The graph is a very simple object, but the mathematics of it is formidably complex. The mathematics of graphs is not the purpose of this paper. We will just be concerned with one particular application of graphs.

A network of any kind is a graph. Electricity grids, telecommunications networks, roads, river basins, sewage systems and freshwater supply systems are examples of networks. Acquaintanceship among people, genealogical trees, scientific co-authorship, food webs, cell metabolism, webs of political influence and industrial ownership, email, and the World Wide Web can be represented and analysed as graphs. This shifts the focus from analysing a number of different phenomena separately to studying them jointly in terms of a single abstract mathematical object.

The graph consists of vertices that may or may not be interconnected by edges. Vertices that are connected by an edge are said to be adjacent. There may be more than one edge between adjacent vertices (not shown in Figure 3). The graph to the left in the figure is undirected. The graph to the right is directed: there is an edge from one vertex to another as indicated by the arrow, but not in the opposite direction. A two-way interconnection requires two edges with arrows in opposite directions as shown.

A path between two vertices is a contiguous string of edges between them. The distance between two vertices is the shortest path between the vertices, and the diameter of the graph is the longest distance between any two vertices. A graph with isolated components or vertices, such as the graph in the figure, has infinite diameter. A graph is path-connected (or just connected) if a path exists between any two vertices.
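These terms can be made concrete with a short sketch: distance computed by breadth-first search, and diameter as the largest such distance over all vertex pairs. The function and variable names are ours, not from the paper.

```python
# A minimal sketch of the graph terms above: distance via breadth-first
# search, and diameter as the longest distance between any two vertices.
from collections import deque

def distances_from(graph, start):
    """Shortest path lengths (in edges) from start to every reachable vertex."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        v = queue.popleft()
        for w in graph[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return dist

def diameter(graph):
    """Longest distance between any two vertices; infinite if disconnected."""
    best = 0
    for v in graph:
        dist = distances_from(graph, v)
        if len(dist) < len(graph):
            return float("inf")   # isolated components give infinite diameter
        best = max(best, max(dist.values()))
    return best

# Undirected graph as adjacency lists: a path a-b-c-d plus the edge a-c
g = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"], "d": ["c"]}
print(diameter(g))  # -> 2  (e.g. from "a" to "d" via "c")
```

The same `diameter` function returns infinity for a graph with isolated components, matching the definition above.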

The diameter of the acquaintanceship graph of every living human is about 6 (Milgram’s six degrees). The diameter of the World Wide Web is about 19. Graphs with small diameters compared to the number of vertices are the small world graphs. Both the graph of acquaintanceships and the Web are small world graphs.

Figure 3 A graph or network, with a vertex and an edge labelled

The number of edges at a vertex is called the degree of that vertex. For a directed graph we define in-degree and out-degree as the number of arrows pointing into or out of the vertex, respectively. Vertex, edge, diameter and degree are about the only terminology we need from graph theory in order to understand what follows.

We will be concerned with a particular type of graph called the random graph. Random graphs may be generated using various algorithms. The following two algorithms are particularly easy to understand, and one of them provides us with a type of graph that is particularly important in assessing the vulnerability of society.

The theory of random graphs is rather new. The first comprehensive work in the area is that of Erdös and Rényi (E-R) from 1959–60. Then it took almost 40 years before the next radically new step was taken: the discovery of the structure of scale-free graphs. The mathematical foundation of the theory of scale-free graphs was developed mainly by the physicists Barabási and Albert and the mathematicians Bollobás and Riordan during the last two or three years.

The E-R graph can be generated in the following way. In the starting position, the inventory of vertices of the graph is given. If the graph contains 101 vertices, these vertices can be interconnected in pairs by at most 5050 edges. Select a pair of vertices and toss an unfair coin that has probabilities 0.02 and 0.98 for showing heads and tails, respectively. If heads shows up, then draw an edge between the vertices. Continue this process for all 5050 possible pairs of vertices. A graph generated in this way then contains on average 101 edges. Since each edge contributes to the degree of two vertices, the average degree, taken over all vertices of the graph, is 2.
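The construction can be sketched directly with the standard library (the helper name is ours). With 101 vertices and p = 0.02 the expected number of edges is 0.02 × 5050 = 101, giving an average degree of 2:

```python
# A sketch of the Erdös–Rényi construction described above: 101 vertices,
# 5050 candidate pairs, each pair connected with probability 0.02.
import random
from itertools import combinations

def er_graph(n=101, p=0.02, seed=1):
    """Toss the 'unfair coin' once per vertex pair; heads draws the edge."""
    random.seed(seed)
    return [pair for pair in combinations(range(n), 2) if random.random() < p]

n = 101
edges = er_graph()
print(len(list(combinations(range(n), 2))))   # -> 5050 candidate pairs
# Each edge contributes to the degree of two vertices, so the average
# degree is 2 * |edges| / n, which concentrates around 2.
print(round(2 * len(edges) / n, 1))
```

Running the same construction with other coin probabilities reproduces the regimes described next: fragmented graphs for small p, a giant component and eventually a connected graph as p grows.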

For different probabilities on the unfair coin, E-R graphs with different properties can be drawn: graphs which are fragmented into isolated components, graphs which contain one giant component, graphs which contain a certain pattern, graphs that are connected, and so on.

Figure 4 Growing a graph


A large E-R graph is featureless in the sense that it looks the same in every section of the graph. Select any two subsets of vertices from anywhere in the graph. These subsets will look the same from a statistical viewpoint.

The other type of graph can be generated by a growth process first suggested by Barabási and later refined by Bollobás and others. The process is illustrated in Figure 4, where a graph is grown for 15 generations.

The algorithm is as follows. At each generation, add a new vertex (red in the figure) and connect it to two existing vertices. The probability that there will be an edge between the new vertex and an existing one is (say) proportional to the degree of that vertex. In the fourth generation of the graph process (see the figure), there are two vertices with degree 3 and two vertices with degree 2. The probability is then 1.5 times larger that the new vertex added in the fifth generation will be connected to a vertex with degree 3 than to a vertex with degree 2.

This process results in an uneven growth pattern where the degree of some vertices increases faster than that of other vertices. A few vertices will then have a much larger degree than all other vertices. These vertices are called hubs (blue in the figure).
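A rough sketch of this preferential-attachment growth is given below. The names and the degree-weighted sampling trick (a list in which each vertex appears once per unit of degree) are ours; this is a simplification of the Barabási–Albert process, not the authors' exact algorithm.

```python
# A sketch of the growth process above: each generation adds a vertex and
# connects it to two existing vertices chosen with probability proportional
# to their current degree, so early, well-connected vertices become hubs.
import random

def grow_graph(generations=500, seed=7):
    random.seed(seed)
    degree = {0: 1, 1: 1}            # start from a single edge 0-1
    targets = [0, 1]                 # each vertex appears degree-many times
    for new in range(2, generations):
        # sample two distinct existing vertices, degree-proportionally
        chosen = set()
        while len(chosen) < 2:
            chosen.add(random.choice(targets))
        for v in chosen:
            degree[v] += 1
            targets.append(v)
        degree[new] = 2
        targets.extend([new, new])
    return degree

deg = grow_graph()
print(sorted(deg.values(), reverse=True)[:3])  # the few largest degrees (hubs)
print(round(sum(deg.values()) / len(deg), 2))  # ...dwarf the average, near 4
```

Each generation adds two edges, so the average degree stays close to 4 while the maximum degree keeps growing – exactly the hub formation described above.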

The edge degrees of a large graph grown in this (or a similar) way will be distributed according to the power law P(k) ~ k^(-γ), where P(k) is the probability that a vertex has degree k and γ is a constant usually between 1 and 3. The graph is said to be scale-free. Observe that if γ > 1, then the average degree of a large scale-free graph approaches zero. In contrast, the degrees of the vertices of an E-R graph are distributed in accordance with a Poisson distribution around the average degree of the graph. Almost all vertices in the E-R graph will then have a degree very close to the average, and the number of vertices with degrees different from the average decreases exponentially with the distance from the average. The difference between the statistical distributions of the edge degrees of E-R graphs and scale-free graphs is shown in Figure 5.

The difference between the two types of statistical distributions is striking. The phenomena resulting in E-R graphs and scale-free graphs are just as different.

Most natural graphs are not strictly scale-free; for example, the log-log graph may be slightly curved, or certain edge degrees may not exist at all. It is then fairer to say that the graph is multi-scaled, for which the power law distribution is a reasonable approximation enabling us to draw important conclusions and gain new insight. In analogy, it is rather uninteresting to calculate the orbit a particular leaf follows when it falls to the

Figure 5 Distribution of edge degrees of E-R graphs and scale-free graphs

(Figure 5 panels: a) single-scale distribution (E-R graph), number of vertices vs edge degree, peaked at the most typical node, i.e. the average degree; b) scale-free distribution, log(number of vertices) vs log(edge degree).)

ISSN 0085-7130 © Telenor ASA 2004


51 Telektronikk 3.2004

ground in the first place and which forces prevent it from falling in a straight line.

The growth process in the example above is just one of several ways in which a random scale-free graph may evolve. Phenomena that may cause scale-freeness are still being investigated.

Scale-free graphs also share several properties with fractals and critical phenomena in chaotic systems, such as percolation and phase transitions in thermodynamics. This offers access to a wealth of mathematical methods and results from other fields of mathematics and other mathematical sciences. Furthermore, it has recently become apparent that scale-free graphs are essential for understanding several aspects of biological and social systems, psychology, politics, business management and linguistics. This opens up a type of cross-disciplinary insight we have never mastered before.

5 You cannot have your cake and eat it: Robustness and vulnerability

Figure 6 is an example of a small scale-free graph containing eight hubs (in red). This graph is used to illustrate the robustness and vulnerability of scale-free graphs. The graph is small but suffices to illustrate the most important conclusions.

Albert et al. define the robustness of a graph in terms of how much the average distance between vertices in the graph increases if a certain number of vertices are removed randomly from it. If vertices are removed from an E-R graph, the average distance increases evenly. On the other hand, the average distance increases only slowly if vertices are removed from a scale-free graph.

Figure 7 shows the effect on the graph in Figure 6 if nine random vertices, corresponding to 25 % of the vertices, are removed. In the process, even two hubs were removed. The average distance has increased from approximately 2 to approximately 3. If the two hubs had not been removed, the increase in the average distance would hardly be measurable. The general observation is that scale-free graphs are very robust against random attacks: remove a few billion pages from the Web and hardly anyone will notice; remove a few thousand routers from the Internet and the traffic will continue to flow unhindered.
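The random-removal experiment is easy to reproduce numerically. The sketch below is our own illustration under stated assumptions (a preferential-attachment graph of 153 vertices, 8 vertices removed, a fixed random seed); it computes Albert et al.'s robustness measure, the average distance between vertices, via breadth-first search, and counts connected components so that a random removal can be compared with a removal of the highest-degree hubs:

```python
import random
from collections import deque

def grow_scale_free(n_extra, rng):
    """Preferential-attachment graph as an adjacency dict of sets."""
    adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
    endpoints = [0, 1, 1, 2, 2, 0]  # degree-proportional sampling pool
    for new in range(3, 3 + n_extra):
        targets = set()
        while len(targets) < 2:
            targets.add(rng.choice(endpoints))
        adj[new] = set(targets)
        for t in targets:
            adj[t].add(new)
            endpoints.extend([new, t])
    return adj

def remove(adj, victims):
    """Return a copy of the graph with the victim vertices deleted."""
    adj = {v: set(nb) for v, nb in adj.items() if v not in victims}
    for v in adj:
        adj[v] -= victims
    return adj

def components(adj):
    """Number of connected components (BFS)."""
    seen, comps = set(), 0
    for s in adj:
        if s in seen:
            continue
        comps += 1
        q = deque([s]); seen.add(s)
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in seen:
                    seen.add(w); q.append(w)
    return comps

def avg_distance(adj):
    """Mean shortest-path length over all connected vertex pairs."""
    total, pairs = 0, 0
    for s in adj:
        dist, q = {s: 0}, deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        total += sum(dist.values()); pairs += len(dist) - 1
    return total / pairs

rng = random.Random(7)
g = grow_scale_free(150, rng)
hubs = sorted(g, key=lambda v: len(g[v]), reverse=True)[:8]
victims_random = set(rng.sample(sorted(set(g) - set(hubs)), 8))

d_before = avg_distance(g)
d_random = avg_distance(remove(g, victims_random))   # typically barely larger
comps_hub = components(remove(g, set(hubs)))         # typically fragments
```

Random removal typically nudges the average distance only slightly, while deleting the few largest hubs tends to split the graph into several components, anticipating the directed-attack discussion that follows.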

If the attack is directed against the hubs, the graph disintegrates into isolated components, as shown in Figure 8. If terrorists want to take down the Web, then the best they can do is to take out the largest hubs, namely the search engines. The Web then dis-

Figure 6 Example of scale-free graph

Figure 7 Removal of random nodes

Figure 8 Removal of hubs


integrates into myriads of disconnected islands of web pages that are impossible to find. Similarly, find the location of particularly large routers and some of the centralised functions in the Internet, such as interconnect management servers and name servers, and take them out. The traffic handling capability of the Internet then plummets, rendering the network useless. Taking out a number of random routers using explosives has no effect at all on the network at large. This is only a nuisance and an expense for the owner of the building in which the router is placed.

If emails containing viruses are filtered out by organisations having many large address lists, the spreading of the virus is hampered. The dissemination of the SoBig virus was slowed down in this way.

These simple examples illustrate what can be proved by rigorous mathematics, namely that scale-free graphs are very robust against random attacks but extremely vulnerable to directed attacks. The latter example also shows that it may be possible to protect the network against directed attacks if the structure of the network is properly understood.

The robustness is also a clue to the understanding of other phenomena.

Nature acts blindly. An error in the chemistry of a cell caused by, say, radiation is thus a random attack on the metabolic network of the cell. In the metabolic network, the vertices are chemical substances, and an edge between two substances indicates that they take part in the same chemical reaction. This network is scale-free and is thus robust against random changes. This robustness makes organisms survive even in the most hostile environments where mutations take place frequently. Neural networks are scale-free. This ensures that the brain works perfectly well even after millions of neurons have died. Food webs are also scale-free: removing species at random from an ecological system usually does not cause total collapse of the system, but removing just a single species may sometimes destroy the ecology. In the latter case, the removed species was a hub in the web.

It is not surprising that natural selection favours scale-freeness, since such graphs are not likely to be destroyed by random incidents.

The problem with the Internet, the email service and the Web is that they are not attacked by someone like Nature acting blindly, but by someone who can identify and take out just those few functions that cause the whole system to disintegrate. This and the extended role of telecommunications described above are the two factors that cause a dramatic change in the vulnerability of society.

6 Telecommunications as a hub of society

As shown in Figure 9, all activities taking place in society can be visualised as a directed graph. The vertices represent the activities that keep up the hustle and bustle of society; a directed edge indicates that the activity the arrow points at cannot be completed without the aid of the activity at the root of the arrow. The edges of the graph may even be weighted in various ways, where the weight associated with an edge may indicate, for example, how strong the dependency is. Similarly, a separate graph may be drawn for every single activity, including service provision and industry, just in order to study how this activity depends on other activities.
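Such a weighted dependency digraph is straightforward to represent and query. The activities and weights below are hypothetical examples echoing the vertices of Figure 9, chosen by us for illustration only:

```python
from collections import deque

# An edge (a, b, w) means activity b cannot be completed without
# activity a; w in (0, 1] is a hypothetical strength of the dependency.
deps = [
    ("electricity", "ICT", 0.9),
    ("ICT", "stock exchange", 1.0),
    ("ICT", "logistics", 0.8),
    ("logistics", "car industry", 0.7),
    ("electricity", "water supply", 0.9),
]

graph = {}
for src, dst, w in deps:
    graph.setdefault(src, []).append((dst, w))

def dependents(graph, root):
    """All activities that transitively depend on `root` (BFS)."""
    seen, q = set(), deque([root])
    while q:
        u = q.popleft()
        for v, _w in graph.get(u, []):
            if v not in seen:
                seen.add(v)
                q.append(v)
    return seen
```

A query such as `dependents(graph, "ICT")` then returns every activity that, directly or indirectly, cannot function without ICT; the larger this set is for a vertex, the more hub-like the vertex.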

Altogether, it is possible to describe society in numerous ways using graphs, focusing on particular sets of activities, variables and dependencies. The discovery of the properties of scale-free graphs may stimulate such research, since it is possible to draw important conclusions just by observing the structure of the graph.

The complete graph structure of society is immeasurably complex. Even the graph of a single industry is immensely complicated. However, many of the graphs of society are likely to resemble scale-free graphs. This means that we may use new insight to assess how robust and vulnerable society is to different types of incidents. One example in epidemiology is that random vaccination of people against AIDS does not slow down the spreading of the disease. This is so because AIDS spreads in a scale-free graph. The

Figure 9 Society as a graph (vertices include ICT, electricity, roads, health, water supply, the car industry, oil refineries, iron mines, the stock exchange and logistics)


only efficient method to bring an end to the pandemic is to identify who may be the hubs in this graph and vaccinate them. This is efficient and cheap, but it creates ethical dilemmas, since a core part of identifying the hubs is to pick out promiscuous homosexual males. On the other hand, random vaccination is democratic and free from ethical dilemmas, and it may give a haunted population the impression that something is being done to help them.

Figure 10 shows one particular graph of society. The graph is three-partite and consists of a single vertex (telecommunications) at the upper layer, a number of independent clusters of electric power suppliers at the middle layer, and the rest of society at the bottom layer. The directed arrows indicate that every activity of society depends on electric power and telecommunications.

There is only one single telecommunications system, reaching into terminals and computers everywhere on the globe. No other organisation or societal activity, except primitive agriculture and fishery, can function without telecommunications. Telecommunications is thus a single hub that every other activity depends upon: taking down telecommunications is the same as taking down society. In principle, it suffices to launch a single attack against telecommunications in order to take it down globally, though it may be extremely hard to figure out how this can be done. However, what is important is not to analyse how the telecommunications system itself can be destroyed but how this system can be used to destroy other systems.

The electric power system is also a hub, or rather several hubs, of society. Electric power systems are regional. This means that if there is a problem in one region, the other regions are not influenced by this problem. In order to take down the power system it is necessary to launch separate attacks against each region. However, several recent incidents have shown that the regional systems are vulnerable not only to directed attacks but to any type of attack, because the power grid is sensitive to cascading failures: if it fails at one point, overload may cause secondary failures in other parts of the system. The vulnerability of the power grid is thus different from that of telecommunications.

It is well known in epidemiology that a disease usually starts to spread only if a sufficiently large population is infected. The threshold depends on how virulent the disease is, how it spreads from person to person, and the density and mobility of the population. Usually, the initial phase of the epidemic develops very slowly. This is a threshold effect found in all E-R graphs. One exception is AIDS: the Euro-American epidemic spread essentially from one person.

One of the early findings concerning scale-free graphs is that these graphs possess no threshold. This explains the onset of the AIDS epidemic. Furthermore, we know from experience that a data virus is usually spread from a single source. That it takes the virus only a few hours to spread over the whole network is owing to the scale-free nature of the Internet.
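The single-source spreading can be illustrated by simulating an infection on a preferential-attachment graph. This is our own sketch under simplifying assumptions (a 203-vertex graph, infection probability 1 per contact per round, i.e. a worst-case SI model); it counts how many rounds a single infected vertex needs to reach the whole network:

```python
import random
from collections import deque

def grow_scale_free(n_extra, rng):
    """Preferential-attachment graph as an adjacency dict of sets."""
    adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
    endpoints = [0, 1, 1, 2, 2, 0]
    for new in range(3, 3 + n_extra):
        targets = set()
        while len(targets) < 2:
            targets.add(rng.choice(endpoints))
        adj[new] = set(targets)
        for t in targets:
            adj[t].add(new)
            endpoints.extend([new, t])
    return adj

def spread_from(adj, source):
    """SI spreading with certain infection: each round every infected
    vertex infects all its neighbours. Returns the infected set and the
    number of rounds until no new vertex can be reached."""
    infected, frontier, rounds = {source}, {source}, 0
    while frontier:
        frontier = {w for u in frontier for w in adj[u]} - infected
        if frontier:
            infected |= frontier
            rounds += 1
    return infected, rounds

rng = random.Random(3)
g = grow_scale_free(200, rng)
infected, rounds = spread_from(g, 0)
```

Because the hubs act as short-cuts, the number of rounds stays tiny compared to the number of vertices; once a hub is infected, most of the network follows within one or two further rounds.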

The importance of telecommunications as a hub thus lies in the fact that it can be used to disseminate efficiently data viruses that destroy the computers connected to the network. These are the computers we use in order to run and manage our society. If the data virus is the bullet, the telecommunications network is the gun. So instead of taking down society by destroying the telecommunications network, it is far more effective, and probably much simpler, to damage society by utilising the ubiquity and the scale-freeness of the telecommunications network to launch an attack against the computers connected to this network.

7 Stand up and fight!

During the last ten years we have made society critically dependent on computers. Everything we do now is done much more efficiently and cheaply than before; we can do things that were impossible ten years ago; and we have transformed our way of living. The price tag is that we have made society vulnerable to attack by virtually anyone.

We have seen that the source of the increased vulnerability is that the ICT infrastructure has the shape of scale-free graphs. Furthermore, we may expect that

Figure 10 The hubs of society (telecommunications at the top; regional electric power systems in the middle; industries and infrastructures at the bottom)


many of the dependency graphs of society are also scale-free. These trivial observations indicate that we cannot assess the vulnerability of society unless we understand the properties of random scale-free graphs. It is only recently that we have developed the mathematical tools that enable us to study these aspects of society.

Intuitively, it can also be inferred that protecting scale-free networks requires different countermeasures from those required for protecting networks and systems not having this property. The way in which we have historically organised our defences is not based on such knowledge and is likely to be ineffective against the new threats. These threats did not exist ten years ago, and their very existence was recognised just a couple of years ago, by groups remote from the power hierarchies of society. There has simply not been enough time for the authorities to recognise the full impact of the new threats.

Scale-freeness is not just a threat. The phenomenon exists both in nature and society because it offers many benefits, in particular protection against the caprices of random processes. The menagerie of benefits is still being explored. These studies may even provide us with tools that can be used as efficient countermeasures against deliberate destruction.

Bibliography

Albert, R, Barabási, A-L. Statistical mechanics of complex networks. Reviews of Modern Physics, 74, 2002.

Albert, R, Jeong, H, Barabási, A-L. Error and attack tolerance of complex networks. Nature, 406, 2000.

Audestad, J A. Challenges in telecommunications. Telektronikk, 98 (2/3), 159–182, 2002.

Barabási, A-L. Linked: The New Science of Networks. Perseus Publishing, 2002. This is a popular introduction to random graph theory requiring no knowledge of mathematics.

Bollobás, B. Random Graphs, 2nd edition. Cambridge University Press, 2001. (The theory of E-R graphs)

Bollobás, B, Riordan, O. Robustness and vulnerability of scale-free random graphs. Internet Mathematics, 1 (1), 2003.

Complexity, 8 (1), Special issue: Networks and Complexity. September/October 2002.

Godoe, H. Innovation regimes, R&D and radical innovations in telecommunications. Research Policy, 29, 2000.

Mandelbrot, B. The Fractal Geometry of Nature. Freeman, 1983.

Milgram, S. The small world problem. Psychology Today, 2, 1967.

Spencer, J. The Strange Logic of Random Graphs. Springer, 2001.

Stabell, C B, Fjeldstad, Ø D. Configuring value for competitive advantage: On chains, shops, and networks. Strategic Management Journal, 19, 1998.

Jan A Audestad (62) is Senior Adviser in Telenor. He is also Adjunct Professor of telematics at the Norwegian University of Science and Technology (NTNU), and he holds a professorship in information security at Gjøvik University College, where his main task has been to build up a new master degree in information security, sponsored, amongst others, by Telenor. He has a Master degree in theoretical physics from NTNU (1965). He joined Telenor in 1971 after four years in the electronics industry. From 1971 to 1995 he did research primarily in satellite systems, mobile systems, intelligent networks and information security. Since 1995 he has worked in the area of business strategy. He has chaired a number of international working groups and research projects standardising and developing maritime satellite systems, GSM and intelligent networks.

email: [email protected]


1 Introduction

Predicting the research focus and market development of a field that changes as quickly as communication technology, in particular the wireless field, is not a task that can ever be executed with 100 % accuracy. Looking 15 years into the past, many of today's most important and pervasive communication technologies (e.g. GSM, WWW) were not yet in public use. Taking into account the accelerating pace of improvements and cost reductions with regard to enabling technologies, as well as the ever-increasing impact that communication technology has on the human lifestyle, it is likely that the future may hold even greater surprises than did the past. However, some basic trends are emerging.

Many different groups and organisations have made predictions and foresights of future communication technology developments; see, for instance, the CELTIC initiative [CEL1]. With regard specifically to wireless communications, the book [KAR03] discusses possible scenarios for the "mobile world" in 2015. Four different possible scenarios are suggested, differing widely with respect to economic, political, and technological developments. In this paper we will not assume any particular scenario in the economic and political sense, since this necessarily will depend on several factors outside our field of competence. Rather, we will focus on the more "objective" truths of technology development, and discuss certain basic technological aspects of how a technologically sound and efficient future wireless/mobile communication system should be designed. For the most part, with one exception, we will restrict ourselves to what are usually called "lower-layer" technological challenges (referring to the OSI protocol layer stack).

More specifically, our goal is to provide an overview of some of the radio ("air") interface and access technologies we believe will be of greatest importance and have the greatest promise as basic enabling technologies in future wireless and mobile systems. We will not go deeply into technical details; rather, we will try to highlight some of the major technology trends and buzzwords and what the advantages of these technologies are. Since the scope is of necessity quite broad and space is limited, we can only scratch the surface of the various topics, while providing a diverse list of state-of-the-art, up-to-date references for further reading.

2 Air interfaces and radio access: an overview

As defined in the present paper, the "air interface" and "radio access" part of a communication system can be thought of as residing between the mobile or wireless terminals and the core network, mainly encompassing technology components for:

• Channel (i.e. error control) coding/decoding
• Modulation/demodulation/detection
• Synchronization
• Channel estimation
• Equalization
• Interference management
• Multiuser access and scheduling

Note that some of these operations may, depending on the technology used, be performed in a joint and/or iterative manner, e.g. coded modulation, joint estimation/detection, iterative multiuser decoding/detection, etc.

It is not our intention to provide an exhaustive overview of all possible candidate technologies for implementing the air interfaces/access technologies in future mobile and wireless systems; rather, we will concentrate on a select few technology components and principles that we believe have the greatest

Radio interface and access technologies for wireless and mobile systems beyond 3G

GEIR E. ØIEN

Geir E. Øien is Professor of Information Theory at NTNU, Trondheim

In the present paper we describe some of the radio interface and access technologies and principles that we believe will become most important for future wireless and mobile communication systems beyond 3G (B3G). We cover multiple-input multiple-output (MIMO) technology, link adaptation techniques, multi-carrier modulation (OFDM), iterative ("turbo") receiver processing, and cross-layer design methodologies. We seek to highlight the main advantages (and, if any, disadvantages and limitations) of each component technology described, with minimal use of mathematics. The intention is to give an overview without going too deeply into technical details; however, an extensive and up-to-date reference list is provided so the interested reader can know where to go for more information on a particular topic.


potential in the realization of spectrally efficient, reliable, flexible broadband multimedia wireless and mobile communications. Before going into the technologies specifically, we will briefly review the currently more or less broadly accepted picture of how future wireless and mobile communication systems will most probably develop.

3 Wireless and mobile communications beyond 3G: An overview of prevailing trends

In the future, wireless communications will become almost ubiquitous, and the ability to communicate seamlessly, nomadically, and with increased mobility will be ever more important. Relevant perspectives on the development of so-called 4th generation (4G), or "Beyond 3rd Generation" (B3G), wireless technology are described in [WWRF, 4GMF].

Also, the International Telecommunication Union (ITU) has developed a Recommendation, ITU-R M.1645, for B3G [ITU1], and a corresponding list of key research questions [ITU2] providing guidance for researchers wishing to contribute to B3G development. The recommendation reads as follows (IMT-2000 being yet another way of saying "3G"): "Systems beyond IMT-2000 will be realized by functional fusion of existing, enhanced and newly developed elements of IMT-2000, nomadic wireless access systems and other wireless systems, with high commonality and seamless interworking."

The research goals established by the ITU for the novel B3G radio interfaces indicate ubiquitous coverage with target (downlink) data rates in the range of 1 Gb/s at low mobility and 100 Mb/s at high mobility. A non-exhaustive list of the key research questions considered by the ITU includes (focusing on the points most relevant for air interface and radio access development) [ITU2]:

• Would there be any significant constraints in achieving the target data rates, consistent with the timelines discussed in Recommendation ITU-R M.1645, related to terminal speed, or to the power consumption and the consequences for the battery in the terminal?

• What would be the RF channel bandwidth(s)?

• What spectrum efficiency can be achieved?

• If the target bit rate for the uplink of the radio interface were lower than for the downlink, how would this affect its characteristics?

• To what extent could a common or adaptable radio interface be capable of meeting the goals for both high mobility, such as mobile access, and low mobility, such as nomadic/local wireless access?

• To what extent might the evolution of IMT-2000 and other existing radio interfaces meet these goals?

• What technology developments in the radio access network might reduce the need for additional spectrum for systems beyond IMT-2000?

• What are the implications of different technologies (antenna concepts, physical layer concepts, higher protocol layers, etc.) and system architectures (such as cellular-type deployments, multihop-based deployments, etc.) for the feasibility of the data rate targets, the range, the system capacity, etc.?

• What is the impact of different frequency ranges and mobility targets on the feasibility of the data rate targets?

• What is the spectrum demand for initial coverage in the deployment area for different system architectures and technology advances?

• What would be the maximum size of a cell (or the coverage per fixed node)?

It is clear from this list that there are still a lot of open and nontrivial questions related to the development of radio interfaces/access techniques for novel wireless communication systems. However, it seems clear that B3G will ultimately represent a convergence between fixed wireless access, wireless mobile, wireless LANs, and packet-division-multiplexed networks [4GMF]. One integrated terminal with one global personal number will, in time, most probably be able to access freely any wireless air interface.

The radio transmission modules will become fully software-definable, re-configurable, and programmable (often referred to as software radio or cognitive radio). The network will be heterogeneous with regard to available and interacting technologies, content, and services; re-configurable and adaptive; with different sub-networks interconnected and with different capacity and bandwidth requirements, channel characteristics, transmission technologies, processing capabilities, desired and available services, and sub-network architectures at the different levels.

The technologies, subsystems and bandwidths involved will depend on communication distance and will include Personal Area Networks (PANs) over


ultra-short distances (< 10 m), broadband WLAN technology within small cells (< 100 m), a backbone cellular system for communication over medium ranges (< 1 km), all the way up to satellite links for providing ubiquitous access to more basic services in remote and sparsely populated areas.

One of the most important issues governing future development of wireless technologies, services, and business models is the regulation of the available radio spectrum. In the US, there is currently a move towards regulatory reforms that may eventually free up enough radio frequency (RF) bandwidth to significantly influence the development of mobile telephony and wireless Internet services [SPEC]. As an example, 85 MHz of bandwidth previously allocated for analogue UHF broadcasting is being released for use in future mobile communications services. Digital terrestrial broadcasting (DVB-T) exploits the bandwidth far more efficiently, facilitating both more TV programmes and extra bandwidth for other services. Frequency reallocation, spectrum leases, and spectrum sharing will allow the use of several new frequency bands for future mobile communications. These frequencies will find different uses depending on range and capacity needs, with the highest frequencies being reserved for high-capacity short-range communications.

To sum up, the currently prevailing paradigm for the implementation of B3G is that the core network will evolve toward an IP-based network, serving a wireless Internet radio access based on packet switching for all services, including voice [NEW]. The major trend will be a move towards higher frequencies (above 5 GHz), leading to a nano- or pico-cell structure which will make it difficult, if not impossible, to design the network and provide continent-wide coverage on the basis of the standard cellular concept known from e.g. GSM. Rather, the network will evolve towards a structure of multi-layered ad hoc networks, not one rigid network structure. In such a structure, base stations may be installed where they are needed, and connected to each other in a self-configuring way, to transfer IP traffic similarly to the present wired Internet architecture [NEW].

In such a heterogeneous network architecture, distributed high-speed WLANs will serve local hot spots, seamlessly interconnected by the overlaid backbone cellular network and by a wired infrastructure. A multitude of wireless sensors will also be embedded and integrated in the network for dynamic and intelligent interaction and communication between users and devices, as well as directly between devices.

It is clear that there will be a strong demand on B3G technology to deliver much higher data rates and more diverse services than 2G and 3G systems. The design of suitable radio interfaces has to take this into account; the dominant traffic load will be high-speed burst-type traffic. This is a great challenge for all existing radio interface technologies, and novel developments must be made. In the next section we will focus on describing the air interface technology components that we believe will become central in B3G radio interfaces. Ultimately these technologies may enable a potential increase in bandwidth efficiency by a factor of 10 to 100 compared to today's wireless systems like GSM and UMTS.

4 Some important air interface and radio access technologies for B3G

4.1 MIMO technology

Multiple-Input Multiple-Output (MIMO) technology might be the single most important factor for enabling dramatically increased capacity and link reliability in future wireless and mobile communication systems, as well as for improving multi-user transmission, detection and interference cancellation. A block diagram of a generic MIMO system is shown in Figure 1.

As seen from the figure, the key feature of a MIMO system is the use of multiple transmit and receive antenna elements (antenna arrays). MIMO can thus be viewed as an extension of the original concept of "smart antennas" (where multiple antenna elements are used on either the transmitter or the receiver side). For readers interested in exploring technical details beyond what is given in the present paper, an excellent tutorial on MIMO systems can be found in a previous issue of Telektronikk [G&A02].

It was first demonstrated by Foschini and Gans in 1998 [F&G98] that dramatic capacity increases might be enabled on wireless links by equipping both the transmitter and the receiver with multiple antennas, as shown in Figure 1. By so-called spatial multiplex-

Figure 1 Generic MIMO system with transmitter (Tx), MxN channel matrix (H) containing the fading coefficients of all spatial subchannels, and receiver (Rx)



ing, such MIMO systems can enable the same bandwidth to be reused many times within the same cell. In effect, intelligent space-time signal processing is then used to open up several parallel "virtual data pipes" between a transmitter and receiver within the same bandwidth. Each of these data pipes will have the same channel characteristics as an ordinary single-input single-output (SISO) wireless fading channel. Independent data streams can be transmitted through each of these virtual SISO pipes, and the resulting capacity is then the sum of the capacities of the individual data pipes. This is called the spatial multiplexing gain.
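The sum-of-pipes capacity can be computed directly for a small example. The sketch below is our own illustration, not from the article: it evaluates the standard equal-power-split MIMO capacity C = Σ_i log2(1 + (SNR/M) λ_i), where the λ_i are the eigenvalues of H H^H and correspond to the gains of the parallel data pipes. The specific channel matrices and SNR value are illustrative assumptions.

```python
import math

def mimo_capacity_2x2(H, snr):
    """Capacity (bit/s/Hz) of a 2x2 MIMO channel with the transmit power
    split equally over the M = 2 antennas. The eigenvalues of the 2x2
    Hermitian matrix G = H H^H are found from its trace and determinant."""
    (a, b), (c, d) = H
    g11 = abs(a) ** 2 + abs(b) ** 2
    g22 = abs(c) ** 2 + abs(d) ** 2
    g12 = a * c.conjugate() + b * d.conjugate()
    tr = g11 + g22
    det = g11 * g22 - abs(g12) ** 2
    disc = math.sqrt(max(tr * tr - 4 * det, 0.0))
    eigenvalues = [(tr + disc) / 2, (tr - disc) / 2]
    return sum(math.log2(1 + (snr / 2) * lam) for lam in eigenvalues)

snr = 10.0  # linear SNR, i.e. 10 dB (illustrative)
# Rich scattering: two equally strong, independent pipes.
H_rich = [[1, 0], [0, 1]]
# Fully correlated (rank-one) channel: only one usable pipe.
H_corr = [[1, 1], [1, 1]]

c_rich = mimo_capacity_2x2(H_rich, snr)
c_corr = mimo_capacity_2x2(H_corr, snr)
c_siso = math.log2(1 + snr)
```

With the same total power, the well-conditioned channel supports roughly 1.5 times the SISO capacity here, while the rank-one channel collapses to a single pipe: exactly the dependence on rich scattering discussed in the text.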

The remarkable thing is that this effect comes because of, not despite, the multipath fading properties of a typical wireless channel. Fading, which has traditionally been considered an impairment of the channel rather than an asset, is actually exactly what enables MIMO systems with spatial multiplexing to achieve their remarkable spectral efficiencies. This may seem counter-intuitive at first, but it is a simple consequence of the fact that the MIMO channel can be decomposed, by appropriate space-time processing techniques, into a set of parallel data pipes. The richer the scattering environment (i.e. the more "severe" the multipath fading), the more such independent pipes can be created.

Another possibility is to use the available degrees of freedom provided by the multiple antennas to design processing schemes that stabilize the quality of the wireless link (mitigate fading) and thus achieve more reliable communications (lower error rate) for a given average throughput. This is called the spatial diversity gain, and systems exploiting all their degrees of freedom for maximum spatial diversity gain are often referred to as MIMO diversity systems. An illustration of how spatial diversity mitigates fading and stabilizes the channel quality is given in Figure 2. Here, the black and green lines are the fading levels of two independently fading channels, whereas the red line is the fading level after optimal diversity combining of the two independent channels. The diversity is thus of order 2 (i.e. 2 receive antennas are used). It is clearly seen that the combined channel exhibits less severe fading (fewer deep fades) than each of the two individual channels.
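The fade-mitigating effect of order-2 diversity can be quantified with a small calculation. As a simplifying assumption we use selection combining (pick the stronger of the two branches) rather than the optimal combining of Figure 2, which performs at least as well, and model Rayleigh fading so that the channel power gain |h|^2 is exponentially distributed with unit mean. A deep fade then requires both branches to fade simultaneously:

```python
import math
import random

def outage_single(threshold):
    """Rayleigh fading: P(|h|^2 < threshold) = 1 - exp(-threshold)."""
    return 1 - math.exp(-threshold)

def outage_selection(threshold, branches):
    """Selection combining over independent branches: outage occurs only
    if every branch fades below the threshold at the same time."""
    return outage_single(threshold) ** branches

thr = 0.1  # deep-fade threshold, 10 dB below the mean channel power
p1 = outage_single(thr)        # single-antenna outage probability
p2 = outage_selection(thr, 2)  # two-branch outage: roughly p1 squared

# Monte Carlo sanity check of the two-branch analytic value.
rng = random.Random(0)
trials = 200_000
hits = sum(
    1 for _ in range(trials)
    if max(rng.expovariate(1.0), rng.expovariate(1.0)) < thr
)
```

The single branch is in a deep fade about 9.5 % of the time, the selection-combined pair less than 1 % of the time; deepening the fade threshold widens the gap further, which is precisely the diversity-order-2 behaviour the figure illustrates.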

Trade-offs between the two extreme possibilities of spatial multiplexing and diversity combining, which exploit partially the multiplexing gain and partially the diversity gain, are also possible.

Assuming a MIMO system with M transmit antennas and N receive antennas available, a total of MN different fading radio channels (“subchannels”) are set up between the transmitter and receiver. In the original MIMO theory developed by Foschini and Gans, and subsequently used and refined by a host of researchers worldwide, these subchannels were usually assumed to be mutually uncorrelated. This is usually a good assumption if the antenna array elements are placed close to half a carrier wavelength apart. The assumption provides an upper bound on the achievable MIMO spatial multiplexing or diversity gain, since all subchannels in this case fade independently of each other. However, the assumption of uncorrelated subchannels is far from true in all practical cases, e.g. when the antenna elements are placed too close together (as might be the result on a small-sized terminal with MIMO capability). If there is strong spatial correlation, the individual subchannels are more prone to fade “synchronously”, and consequently the potential gain from combining them is reduced.

Figure 2 Illustration of spatial diversity gain: signal level in dB versus time in milliseconds (courtesy of prof. David Gesbert)

ISSN 0085-7130 © Telenor ASA 2004

59 Telektronikk 3.2004

To explain in a straightforward way how the gain of spatial multiplexing appears, let us now assume that all the MN subchannels in an MxN MIMO system are indeed uncorrelated. Furthermore, we assume that the multipath fading on these channels has stationary statistical properties which are also subchannel-independent. Assuming that it is possible to resolve all the MN paths by means of some proper space-time processing scheme, one might at first think that the capacity can potentially be increased by a factor of MN compared to a traditional single-input single-output (SISO) system. However, this is not the case, since in a power-limited system, as is usually assumed, the total transmit power P [W] of course has to be shared between the M transmit antennas. Therefore, the average available power per transmit antenna element is only P/M [W].

Foschini and Gans [F&G98] showed, assuming Rayleigh fading statistics and Additive White Gaussian Noise (AWGN) on each subchannel, that the ergodic channel capacity or Shannon capacity (which for our purposes can be thought of as capacity averaged over time in a mobile system) of such a MIMO system grows linearly with the smaller of M and N, when both M and N are large. Mathematically, we may then write the approximate ergodic capacity as

C = K log2(1 + ρ) [bits/s/Hz],

where K = min(M,N) and ρ is the received signal-to-noise ratio (SNR) at one of the receiving antennas. The potential capacity increase of a MIMO system over a SISO system is illustrated in Figure 3 for M = N = 4.
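The formula can be evaluated directly; the sketch below (our own, with an assumed helper name) compares a 4x4 MIMO system to a SISO link at the same per-antenna SNR:

```python
import numpy as np

def ergodic_capacity_approx(M, N, snr_db):
    """Approximate ergodic MIMO capacity C = min(M, N) * log2(1 + rho),
    valid in the many-antenna, uncorrelated-subchannel regime (Foschini/Gans)."""
    rho = 10 ** (snr_db / 10)  # linear SNR at a receive antenna
    return min(M, N) * np.log2(1 + rho)

snr_db = 20
print("SISO 1x1:", ergodic_capacity_approx(1, 1, snr_db), "bits/s/Hz")
print("MIMO 4x4:", ergodic_capacity_approx(4, 4, snr_db), "bits/s/Hz")
```

At 20 dB SNR the 4x4 system offers four times the SISO spectral efficiency, which is the linear growth in min(M,N) illustrated in Figure 3.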

The above ergodic Shannon capacity is an asymptotic and theoretical limit on the average amount of information that can be transmitted reliably (with an arbitrarily chosen maximal error rate) through the MIMO channel over time. In practice we must of course use some transmission scheme that is implementable with finite complexity and delay, which will limit the practical capacity. For this purpose, so-called space-time codes have been developed. In particular, space-time block coding (STBC) has become a popular technique. Space-time codes can be designed either to exploit the spatial multiplexing gain [Fos96] or the spatial diversity gain [Ala98, Tar99].

The idealized MIMO theory presented above has now also been extended to include spatially correlated channels. It has been shown that even when quite high spatial correlation is present between the subchannels, a significant spatial multiplexing gain (which translates to a potential rate increase) and/or spatial diversity gain (which translates to more stable channel quality) can still often be achieved. This means that MIMO technology holds a great deal of promise for a wide range of practically important scenarios, and not only in the idealized case.

4.2 Link adaptation techniques

Link adaptation is a prerequisite for optimal capacity exploitation in temporally and spatially varying radio channels, both in the single-link and in the multi-user (network) case. The principle is to adapt the throughput (transmit spectral efficiency) and power on a link dynamically to predicted channel conditions, to maximize the ergodic channel capacity while adhering to some given design quality constraints. In this way one avoids the “worst-case” choice of transmission schemes that must be made in non-adaptive systems, which therefore typically can exploit only a fraction of the theoretically available capacity.

In an adaptive transmission scheme, the receiver estimates the channel state, and the obtained channel state information (CSI) is fed back to the transmitter via a feedback (also called return) channel. Typically the CSI fed back will be an index indicating in which sub-range (out of a finite number of possible sub-ranges) the instantaneous channel signal-to-noise ratio (CSNR) resides.

The transmitter subsequently uses the CSI to update the transmission parameters, such as modulation constellation, error-control code, and/or transmit power level. This principle is illustrated in Figure 4. Here, n is the index of the transmitter mode (possible combination of channel code and modulation constellation) to be used, corresponding to the channel being in the state which corresponds to the nth sub-range of signal-to-noise ratio levels.

Figure 3 The ergodic capacity (in bits/s/Hz), as a function of the channel signal-to-interference+noise ratio in dB, for a 4x4 MIMO system (red) and a SISO system (blue) (courtesy of prof. David Gesbert)

Figure 4 Block diagram of a link adaptation scheme where the channel codes and modulation constellations are adaptive

The choice of transmitter settings can for example be made subject to fulfilment of a chosen target bit-error rate (BER) and an average transmit power constraint, so that the transmitter attempts to transmit information at the highest instantaneous spectral efficiency possible, while simultaneously fulfilling the BER constraint and the power constraint. Thus, for a high CSNR the transmitter will choose settings corresponding to a high spectral efficiency, whereas it will decrease the spectral efficiency dynamically as the channel quality deteriorates. Other design criteria are also possible, depending on the application at hand.
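As a hypothetical sketch of such a mode table (the thresholds and mode set below are invented for illustration and not taken from any standard), the transmitter simply picks the highest-rate mode whose CSNR sub-range is met:

```python
# Hypothetical CSNR sub-ranges (dB) and corresponding transmitter modes;
# real thresholds depend on the codes, constellations and target BER.
MODES = [
    # (CSNR threshold in dB, mode name, spectral efficiency in bits/s/Hz)
    (5.0,  "BPSK, rate-1/2 code",   0.5),
    (10.0, "QPSK, rate-1/2 code",   1.0),
    (16.0, "16-QAM, rate-3/4 code", 3.0),
    (22.0, "64-QAM, rate-3/4 code", 4.5),
]

def select_mode(csnr_db):
    """Return the highest-rate mode whose CSNR threshold is met,
    or None (no transmission) below the lowest threshold."""
    chosen = None
    for threshold, name, rate in MODES:
        if csnr_db >= threshold:
            chosen = (name, rate)
    return chosen

for csnr in (3, 12, 25):
    print(csnr, "dB ->", select_mode(csnr))
```

Below the lowest threshold the scheme falls silent rather than violate the BER target; at high CSNR it climbs to the densest constellation.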

The idea of link adaptation dates back to the early 70s, when Murakami and Nakagami suggested using such an idea to combat fading on a deep-space satellite-to-earth channel, by using the earth-to-satellite channel as a feedback channel [M&N71]. In the early 1990s, Steele, Webb, and Hanzo suggested varying the modulation constellation size according to the quality of the channel [S&W91, WHS91].

In 1997, link adaptation was put on a firmer information-theoretic footing when Goldsmith and Varaiya showed that the ergodic capacity of a fading channel could be attained by a certain adaptive scheme in which transmit rate and power were continuously and instantaneously adapted to the channel conditions, assuming these conditions were perfectly known at the transmitter side [G&V97]. Practical link adaptation schemes, typically in the form of adaptive coded modulation [G&C98, L&M98, GOE99, HHØ00, V&G00], constitute discrete-rate, discrete-power approximations to this idealized and practically nonrealizable capacity-achieving scheme. An illustration of the capacities that can typically be achieved in practice is given in Figure 5. Here, the parameter m refers to the use of a Nakagami-m channel model. m = 1 corresponds to the well-known Rayleigh fading channel, whereas increasing values of m mean less severe fading and the introduction of a line-of-sight (LOS) component.

Since the groundbreaking theoretical work of Goldsmith and Varaiya, many researchers have made important contributions to the understanding, design and performance analysis of link adaptation schemes. An exhaustive overview of most research contributions up to 2002 can be found in [HAN02]. It is fair to say that the advantages, design choices, available degrees of freedom, and practical limitations of link adaptation techniques have become well known, and such techniques are gradually being included in several upcoming wireless communications standards. Some of the most important recent contributions in the field deal with:

• Analysis of the effect of imperfect CSI at the transmitter and receiver side on the performance of adaptive (coded and uncoded) modulation schemes [ØHH04, C&G04].

• Optimization of parameters in adaptive (coded) modulation schemes to counteract the effects of imperfect CSI, and to maximize the average spectral efficiency for a given channel under various constraints [C&G04, HØA03, JØH04, D&Ø04].

• Combination with OFDM, antenna diversity, and MIMO concepts.

4.3 Multi-carrier modulation and access

Multi-carrier modulation and access in the shape of OFDM (Orthogonal Frequency Division Multiplexing) is rapidly becoming the most viable option for spectrally efficient broadband modulation and user multiplexing with easy handling of inter-symbol interference (ISI). It is currently being included in new standards for digital subscriber lines, digital broadcasting systems (DAB, DVB), and WLANs (e.g. the IEEE 802.11 and 802.16 standards). It is currently considered a major candidate for modulation and access in B3G proposals around the world, in particular for downlink transmission [SVE03, K&S02].

Figure 5 An example of the achievable capacity (spectral efficiency in bits/s/Hz) of a realizable adaptive coded modulation scheme, as a function of the average CSNR in dB, on Nakagami-m channels for m = 1, 2, 3 (from [HOL02])


In OFDM, a high-rate data stream is broken down into many parallel, orthogonal data streams, each of lower rate. These lower-rate data streams (subchannels, subcarriers) are superimposed in time by means of an inverse Discrete Fourier Transform (IDFT, typically and efficiently implemented by means of a Fast Fourier Transform (FFT)) before transmission. They can be resolved, due to the orthogonality in frequency imposed by using the IDFT, by performing the corresponding Discrete Fourier Transform (DFT) at the receiver side. A block diagram of the transmitter in an OFDM-based communication system is shown in Figure 6, while the individual subchannel spectra are shown in Figure 7 (for the case of N = 5 subcarriers for clarity – in practical OFDM applications the number of carriers may typically be in the order of 50 – 1000). The receiver is the inverse of the transmitter in the sense that it first performs the inverse transform of the transmitter, and then parallel-to-serial converts in order to recover the original time sequence.

The advantages of OFDM are direct results of the splitting of the signal into many distinct, low-rate, orthogonal subcarriers. When we have N subcarriers of equal bandwidth, the bandwidth and thus the transmission rate on each subcarrier is 1/Nth of that of a single-carrier scheme with the same bandwidth as the total OFDM scheme. This in turn means that the OFDM symbol length (in time) is N times longer than for a single-carrier scheme with the same bandwidth, which again implies that the radio channel’s impulse response can be N times longer before the symbols start interfering with each other in time (inter-symbol interference or ISI). Thus, for a given channel, an OFDM system will be much more robust towards ISI than a single-carrier system. As the number of subchannels increases for a given overall bandwidth, eventually each subchannel can be considered flat-fading, i.e. without any ISI.
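The IDFT/DFT pair described above can be demonstrated in a few lines of NumPy; this sketch (our own, with assumed parameter values) shows that the FFT at the receiver recovers the subcarrier symbols exactly, thanks to their orthogonality in frequency:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64  # number of subcarriers

# One QPSK symbol per subcarrier (the low-rate parallel streams)
bits = rng.integers(0, 2, size=(N, 2))
symbols = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

# Transmitter: superimpose the subcarriers in time with an inverse FFT
tx = np.fft.ifft(symbols)

# Receiver: the forward FFT resolves the orthogonal subcarriers again
rx = np.fft.fft(tx)
print("max reconstruction error:", np.max(np.abs(rx - symbols)))
```

On an ideal (noiseless, distortionless) channel the reconstruction error is at the level of floating-point precision; the interesting cases, treated next, are fading channels and nonlinear hardware.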

OFDM is also a spectrally efficient technology with great flexibility, in the sense that the individual subchannels can be individually bit- and power-loaded, dynamically with respect to the instantaneous quality of each individual subchannel. That is, if channel state information is made available to the transmitter, link adaptation can be performed for each individual subchannel. Also, subcarriers can be dynamically allocated to different sources and users in a multi-user, multimedia communications system, to maximally employ the available degrees of freedom. Furthermore, OFDM can be combined with MIMO technology, different kinds of space-time processing, and different kinds of multiple access schemes, e.g. multicarrier CDMA. Frequency diversity, stemming from having N (more or less) independently fading subchannels, can be exploited by means of coding across the subchannels. Finally, OFDM is robust to narrowband interference since the total signal power is spread over a large bandwidth.

Figure 6 Block diagram of an OFDM transmitter: data in, serial-to-parallel conversion, baseband modulation, inverse discrete Fourier transform, OFDM symbols out

Figure 7 OFDM subcarrier spectra for N = 5 subcarriers

Of course, a price has to be paid for all the above advantages. OFDM systems typically exhibit a high Peak-to-Average Power Ratio (PAPR). The PAPR is defined, as implied by its name, as the ratio between the maximal and the average symbol power. Remember that OFDM symbols are created by performing an IDFT, i.e. by summation of many weighted complex exponentials. In the rare, but still possible, event that all or most of the terms sum constructively (in phase), the result may be a very high amplitude. This may result in nonlinear distortion problems such as clipping noise. This is due both to finite word-length effects and to the fact that the high power amplifiers (HPAs) used in wireless systems are typically nonlinear devices. The nonlinear distortion will degrade the BER performance and cause energy leakage between subchannels. There is currently a lot of research effort worldwide on mitigating the effects of high PAPR, e.g. by HPA linearization, and also on signal processing and coding methods for PAPR reduction [RLP04, YFY03, N&L02, RJK02].
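A small numerical sketch (our own) of the PAPR problem: a single-carrier QPSK signal has a constant envelope (0 dB PAPR), whereas the IFFT over N = 256 QPSK subcarrier symbols occasionally sums in phase and produces large peaks:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 256  # subcarriers per OFDM symbol

def papr_db(x):
    """Peak-to-average power ratio of one complex baseband symbol, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

# Single-carrier QPSK: constant envelope, so the PAPR is 0 dB
qpsk = np.exp(1j * np.pi / 4 * rng.choice([1, 3, 5, 7], size=N))
print("single-carrier QPSK PAPR:", papr_db(qpsk), "dB")

# OFDM symbol: IFFT of N QPSK subcarrier symbols -> occasional large peaks
ofdm = np.fft.ifft(np.exp(1j * np.pi / 4 * rng.choice([1, 3, 5, 7], size=N)))
print("OFDM symbol PAPR        :", papr_db(ofdm), "dB")
```

A typical OFDM symbol of this size shows a PAPR of roughly 8 – 12 dB, which is exactly the headroom a nonlinear HPA must accommodate to avoid clipping.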

In traditionally designed (i.e. using the IFFT and FFT) OFDM systems there is also some bandwidth loss due to the insertion of overhead information in the form of a guard interval between symbols. The guard interval is typically constructed by cyclic extension. It is there to avoid the “tail” of one OFDM symbol leaking into the next and thus causing ISI. The length of the guard interval must therefore be at least equal to the length of the channel impulse response.

However, as the number N of subchannels grows larger, the overhead represented by the guard interval becomes smaller and eventually insignificant, since the symbol length increases linearly in N while the necessary guard interval length remains constant. It is also possible to remove the need for a guard interval completely by introducing pulse shaping into the system, where the pulse shape is optimized for ISI removal [V&H96]. The price paid for this modification is a computationally somewhat more complex transmitter and receiver, since straightforward FFT algorithms can no longer be used.
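The cyclic-extension mechanism can be sketched as follows (our own example with assumed dimensions): prepending the last L samples of the IFFT output makes the channel's linear convolution act as a circular one over the retained block, so each subcarrier sees a single flat tap H[k] that is trivially equalized:

```python
import numpy as np

rng = np.random.default_rng(2)
N, L = 64, 8  # subcarriers; channel impulse-response length (= guard length)
h = rng.normal(size=L) + 1j * rng.normal(size=L)  # multipath channel taps

symbols = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=N) / np.sqrt(2)
tx = np.fft.ifft(symbols)

# Guard interval as cyclic prefix: prepend the last L samples of the symbol
tx_cp = np.concatenate([tx[-L:], tx])

# Channel acts by linear convolution; the receiver discards the prefix
rx = np.convolve(tx_cp, h)[L:L + N]

# Thanks to the prefix, the channel is circular over the N retained samples,
# so subcarrier k sees the single flat tap H[k] = FFT(h, N)[k]
H = np.fft.fft(h, N)
equalized = np.fft.fft(rx) / H
print("max error after one-tap equalization:", np.max(np.abs(equalized - symbols)))
```

The overhead is L/(N+L) of the transmitted samples, which shrinks as N grows while L stays fixed by the channel, matching the argument above.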

Lastly, OFDM systems rely on very accurate frequency and phase information to preserve the orthogonality between subcarriers (and thus the ability to resolve the subchannels in the receiver). Therefore they are more sensitive to frequency and phase offsets than single-carrier systems. However, various compensation methods exist for mitigating this problem; see e.g. [HSK01].

4.4 Iterative receiver processing

With the invention of Turbo codes by Berrou et al. in 1993 [BER93], the field of error-control (channel) coding was suddenly revolutionized. After 50 years of constructing mathematically complex codes trying to approach the information-theoretic performance bounds of Claude Shannon [ØI02], the coding community was confronted with a scheme in which a simple parallel concatenation of two basic recursive systematic convolutional (RSC) codes, separated by an interleaver (cf. Figure 8), outperformed all the elaborate code constructions of previous times.

The improvements indicated by Berrou’s results were in fact so large compared to classical codes that some actually did not believe the results at first. However, the results were quickly confirmed by other research groups, and a period of intense research into turbo codes followed.

Many alternative turbo code constructions, including serial instead of parallel concatenation of component codes, have since been proposed, and it is fair to say that turbo codes are now quite well understood. Tools from graph theory have been particularly useful in furthering the understanding of these codes [LOE04, KFL01], as well as that of related codes such as the Low Density Parity Check (LDPC), or Gallager, codes. An excellent tutorial overview of much of the fundamental turbo coding research from the period 1993 – 2002 can be found in a previous Telektronikk article by Ytrehus [YTR02].

The most significant difference between turbo codes and previous coding schemes is the decoding algorithm, which in the turbo case is iterative and exploits so-called “soft” (non-quantized) information about the correctness of decisions made during decoding. Two component decoders alternate in passing decisions to each other in an iterative fashion until the process converges.

The other distinguishing feature is the construction of long, quasi-random codewords by interleaving of simple component codes. This feature makes the code construction strongly resemble that of the “random coding argument” originally used by Claude Shannon in the proof of his channel coding theorem, when demonstrating the existence of capacity-achieving codes [SHA48]. Part of the genius of turbo codes is that such codes can now be constructed while still retaining a near-optimal decoding algorithm of reasonable complexity, instead of having to use the exhaustive nearest-neighbor search implicit in Shannon’s construction.

Figure 8 The parallel concatenated turbo coding scheme of Berrou et al. (encoder): the data dk feed RSC code C1 directly, and RSC code C2 via non-uniform interleaving, producing the systematic output Xk and the redundancy outputs Y1k and Y2k

The turbo-decoding algorithm is actually built on an algorithm for optimal decoding of linear codes, originally published in the mid-70s and nicknamed the “BCJR” algorithm after its authors [BAH74]. To improve reliability by means of successive iterations, one exploits the fact that when a (preliminary) decision on a symbol is made during one of the iterations, the receiver is also able to compute the probability of this decision being correct or not. This is a consequence of the fact that the soft information is retained throughout the decoding and passed between the component decoders. No hard decisions on the bit level are made before the algorithm converges. If the reliability of an intermediate decision is high (probability of correctness close to 1 or 0), then the decision can be strongly relied upon in subsequent processing (i.e. further iterations), whereas if it is low (probability of correctness close to 0.5), it is relied upon less in further iterations.

Thus, simply put, the benefit of the iterations in the turbo-decoding algorithm is that they successively improve the decision reliabilities with each iteration, until one is “almost certain” that a decision made on a codeword is correct. This decision is then finally output from the receiver to the user. Figure 9 (from [O&R02]) shows a comparison between turbo codes and some classical error control codes, as well as uncoded communications, with regard to bit-error rate performance versus energy spent per information bit. It is seen that the turbo codes come significantly closer than all the other codes to the theoretical limit indicated by the capacity curve.
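A stripped-down sketch of why exchanging soft information helps (our own toy example, far simpler than a real turbo decoder): two “components” each observe the same BPSK symbols in independent noise, and summing their log-likelihood ratios (LLRs) before deciding gives a lower bit-error rate than either component's hard decision alone:

```python
import numpy as np

rng = np.random.default_rng(3)
n, sigma = 20000, 1.0

bits = rng.integers(0, 2, size=n)
x = 1 - 2 * bits  # BPSK mapping: bit 0 -> +1, bit 1 -> -1

# Two independent noisy observations of the same symbols, standing in for
# two receiver components that can exchange soft information
y1 = x + sigma * rng.normal(size=n)
y2 = x + sigma * rng.normal(size=n)

# Soft information: the LLR of a BPSK symbol in AWGN is L = 2y / sigma^2
L1, L2 = 2 * y1 / sigma**2, 2 * y2 / sigma**2

hard1 = (L1 < 0).astype(int)             # hard decision from one component
combined = ((L1 + L2) < 0).astype(int)   # decision after exchanging LLRs

print("BER, single component  :", np.mean(hard1 != bits))
print("BER, combined soft info:", np.mean(combined != bits))
```

Real turbo decoding iterates this exchange, with each component passing only its extrinsic reliability information to the other, but the core benefit is the same: soft values, unlike premature hard decisions, preserve how certain each decision is.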

In the last eleven years, ‘turbo’-based iterative algorithms and the use of ‘soft’ reliability information have in fact been shown to dramatically improve the performance of not only pure channel coding, but also joint multi-user detection and channel estimation [PAR04], multi-user decoding [CMT04], joint pilot-symbol-assisted coded modulation and channel estimation [LGW01], synchronisation [NOE03], equalization [KST04], interference suppression [WAN99], and so on. It can be used in the context of MIMO systems, or with OFDM, CDMA, space-time codes, etc. The “Turbo principle”, as the rationale behind iterative receiver algorithms has come to be called, is in fact a general principle that can be used in any setting where two components of a communication receiver have the possibility of exchanging information about the reliability of their respective decisions. The information from one component is used as extra input to the processing in the other, to improve the reliability of its decisions. This reliability information can in its turn be used in a similar fashion by the first component, and so on, until the process (in most practical cases) converges.

The price paid for the associated performance improvements is (“as usual”) more complex receiver processing than is the case with non-iterative schemes. However, with the emergence of ever more powerful computing capabilities, this is already, in many instances, a small price to pay compared to the performance improvements experienced. This trend will only be strengthened in the future.

Turbo coding is already being included in existing and upcoming standards for wireless communications, such as the UMTS standard, and it is expected that the Turbo principle will be even more important and find wider use in B3G systems.

4.5 Cross-layer optimization techniques

One of the foundations of modern digital communications has been the layered protocol architecture. Such an architecture consists of a layered stack of protocol modules, each of which solves a specific problem by using services provided by lower-layer protocols, and provides services to higher layers. For maximum modularity and ease of design, communication in this architecture typically happens mainly between adjacent layers, and is of limited scope.

A classical example [CRM04] is the Ethernet protocol stack, where the transport layer protocol TCP (Transmission Control Protocol) overlays the network protocol IP (Internet Protocol), which in its turn overlays the Ethernet link layer. At the bottom of the protocol stack is the physical layer, by which everything is physically implemented. The link layer provides connectivity within one given network segment (sending and receiving frames to/from hosts), while the network layer delivers data packets across multiple networks. The transport layer adds connection-oriented reordering of packets, error recovery, flow control, and congestion control. Each layer uses only information from the layer directly beneath it in the protocol stack when operating.

Figure 9 Performance comparison between turbo coding, uncoded communications, and some classical coding schemes (curves: capacity limit, uncoded, sequential limit, convolutional coding, concatenated coding, DVB-RCS turbo). Performance is measured as bit-error rate against energy spent per information bit (courtesy of Nera Research)


As discussed repeatedly in previous sections, an important aspect of wireless channels and networks is dynamic behaviour, such as temporal and spatial changes in channel quality and user distribution. A conventional protocol structure as described in the previous paragraph is inflexible and unable to adapt to such dynamically changing network behaviours, since the various protocol layers can only communicate with each other in a strict and primitive manner. In such a case, the layers are most often designed to operate under the worst conditions, rather than adapting to changing conditions. This leads to inefficient use of spectrum and energy.

An example is that the TCP protocol in its current version is designed to interpret all packet losses as a sign of network congestion, whereas in a wireless system the losses might just as well be a consequence of the channel going into a fade. TCP reacts to such losses by decreasing the source’s packet transmission rate, thus lowering the network throughput unnecessarily in many cases. However, upcoming versions of TCP are expected to feature Explicit Congestion Notification (ECN), in which packets are explicitly marked (one particular header bit is set to 1) by routers in the network each time true congestion occurs. Thus TCP may reduce the source packet transmission rate only when detecting such marked packets [SRK03].

The concept of ECN, where TCP receives explicit information about network conditions at lower layers, is a simple example of the novel design paradigm of cross-layer optimisation, or cross-layer design, and of how such design may enhance the ability of network protocols and applications to interact in a more intelligent manner and to observe and respond to wireless channel variations. Cross-layer design in general simply means that transmission schemes and protocols at the physical layer, link layer, network layer, transport layer, application layer etc. are enabled to communicate more information between themselves across layers, and that their operation – based on such information – may in some sense be adapted and optimized jointly.

In particular, knowledge of the wireless medium gained by the physical and MAC layers (e.g. through channel estimation/prediction) is envisioned to be shared with higher layers. If exploited properly, this knowledge may lead to a much better allocation and use of network resources [SRK03, CRM04, CPM04, T&G03, ZHE03].

As a more elaborate example, consider the design of medium access control (MAC) protocols, which are used to schedule multiple users’ access to the wireless uplink channel, or the base station’s downlink transmission of information to the users, in a cellular network. In such design one may improve the overall system throughput by taking into account the quality of the wireless channel and the available coding and modulation schemes. Assuming time-division multiplexing between users, and the use of link adaptation to exploit the uplink capacity for each user, the overall network throughput can in fact be maximized by always scheduling the user whose uplink channel has the highest channel-to-noise-and-interference ratio (CNIR), and thus the highest channel capacity, of all the users. The throughput gain of such a method over straightforward and traditional “round-robin” time-division multiplexing of users (which is independent of the channel state of each user) is called the multi-user diversity gain. Effectively, the system attempts to “ride the peaks” by transmitting only at high CNIRs, switching dynamically and adaptively between users depending on their channel quality. Examples of such mechanisms are already found in the enhanced general packet radio service (EGPRS) of the GSM system and in the high data rate (HDR) versions of the CDMA2000 3G system [SRK03].
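The multi-user diversity gain is easy to see in simulation; this sketch (our own, with assumed parameters) compares round-robin scheduling with max-CNIR scheduling over independent Rayleigh-fading users:

```python
import numpy as np

rng = np.random.default_rng(4)
users, slots = 4, 10000

# Per-slot CNIR (linear scale) for each user: independent Rayleigh fading,
# so the channel power is exponentially distributed with unit mean
cnir = rng.exponential(scale=1.0, size=(slots, users))
rate = np.log2(1 + cnir)  # per-slot Shannon rate of each user

# Round-robin: slot t is given to user t mod 'users', channel state ignored
rr = rate[np.arange(slots), np.arange(slots) % users].mean()

# Opportunistic (max-CNIR) scheduling: always serve the currently best user
opp = rate.max(axis=1).mean()

print("round-robin throughput  :", rr, "bits/s/Hz")
print("opportunistic throughput:", opp, "bits/s/Hz")
```

Even with only four users the opportunistic scheduler clearly outperforms round-robin, and the gap widens with more users, since the best of many fading channels is increasingly likely to be in a peak. Note that this toy scheduler ignores fairness, which is exactly the issue addressed next.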

To this can also be added fairness constraints, to ensure that all users are granted access to the channel, e.g. after some prescribed maximum waiting time, or for some given percentage of the time, depending on their individual needs and on whether they actually have anything to transmit at a given point in time. One can also take into account different fading statistics for different users, and different traffic classes with different capacity requirements and heterogeneous quality-of-service (QoS) constraints, when designing the MAC protocols.

The above are only selected examples of what cross-layer design can mean in practice. Other examples include joint optimization of source and channel coding schemes, and energy optimization of coding and modulation schemes taking circuit energy consumption into account as well as the transmission energy [CGB03]. The latter technique amounts to cross-layer design of the physical and link layers and may be of particular importance in short-range applications where the circuit energy and transmission energy consumption may be comparable in size, and where overall energy efficiency is of the essence (such as in wireless sensor networks).

A more complete list of hot cross-layer design research topics on a more general level currently includes:

• Throughput-optimized cross-layer control of MIMO systems,


• Space/time/frequency processing techniques for improved multiple access schemes,

• Techniques for increased flexibility in spectrum use,

• Feedback channel and signal design for efficient cross-layer interaction,

• Hybrid-ARQ schemes,

• Dynamic resource allocation and QoS issues,

• Channel-state-aware protocols,

• Stability and robustness issues in cross-layer interactions.

There are also many other aspects that might – and probably will – be taken into account in future cross-layer design beyond those covered by the above examples [CRM04]. Such aspects include security issues, which are outside the scope of this article.

4.6 Channel characterization and implementation issues

In addition to the techniques presented so far, and as a prerequisite for many of them, the ability to accurately measure, model, characterise, estimate and predict the properties of many different types of radio propagation environments will continue to increase in importance with regard to successful design and optimisation of novel and more advanced radio interfaces. Channel knowledge is essential, e.g. for link adaptation and opportunistic multiuser scheduling algorithms.

As an example, it is illustrated in Figure 10 how the bit-error rate (BER) of an adaptive coded modulation system degrades at low signal-to-noise ratios and high feedback channel delays, due to the fact that the channel state information used to adapt the transmitter mode then becomes too inaccurate [ØHH04].

Measurement campaigns, development of quantitative criteria for channel model optimization, statistical methods for fitting observed channel data to simulation models, and procedures for optimization and characterization of model parameters are all important tools which can help us in predicting the “true” performance of a given transmission scheme, in optimizing parameters in such schemes for the best possible performance, or in establishing theoretical bounds for system performance. The better the theoretical model or simulation model, the more accurate the system performance predictions and system design choices that can be made before going to the expensive and time-consuming task of building prototypes.

Finally, it should also be mentioned that the development and commercialisation of new equipment for radio communication will call for the development of hardware with improved performance and reduced component costs. In order to avoid specification of communication standards that are difficult or expensive to implement in hardware, it is important to maintain a useful dialogue between engineers working at different levels of the standardisation process.

5 Summary and conclusions

In the present paper we have described some of the major radio interface and access technologies that we believe will become the most important for future wireless and mobile communication systems beyond 3G (B3G). We have only given an overview without going deeply into technical details; however, an extensive and up-to-date reference list has been provided so that the interested reader knows where to go for more information on a particular topic.

We have tried to highlight the main advantages (and, if any, disadvantages and/or limitations) of each component technology described. We have mainly covered multiple-input multiple-output (MIMO) technology, link adaptation techniques, multicarrier modulation/OFDM, iterative (“turbo”) receiver processing, and cross-layer design methodologies, while also briefly touching upon channel characterization and implementation issues.

Figure 10 BER degradation in an adaptive coded modulation system due to faulty channel state information at low signal-to-noise ratios and large feedback delays. The target bit-error rate when designing the system was 10⁻⁴. (Axes: expected subchannel CSNR, normalized time delay, log10(BER).)

ISSN 0085-7130 © Telenor ASA 2004

Page 68: 100th Anniversary Issue: Perspectives in telecommunications

66 Telektronikk 3.2004

There are of course other interesting research topics in wireless communications which could have been covered here as well. In particular, ultra-wideband radio communications is currently a hot topic in the international research community, due to its potential for providing high-capacity, low-power transmission over very short ranges without interference limitations. However, the scope must be limited in some way, and we believe the selection made reflects the major importance, maturity of understanding, and large potential of the chosen topics relative to others.

The impact of the 3G technology, which is currently under deployment, is still uncertain. As we have seen, B3G technology is still at the research stage, and will not be available until at least after 2010. However, through the use of the techniques described in this paper, B3G has an enormous potential for increase of capacity, and is expected to have a very large impact on future mobile communications.

References

[4GMF] 4G Mobile Forum. September 28, 2004. [online] – URL: http://www.delson.org/4gmobile/intro.htm

[Ala98] Alamouti, S M. A simple transmit diversity technique for wireless communications. In: IEEE Journal on Selected Areas in Communications, 16, 1451–1458, 1998.

[BAH74] Bahl, L R et al. Optimal decoding of linear codes for minimizing symbol error rate. In: IEEE Trans. Inform. Theory, 20 (2), 284–287, 1974.

[BER93] Berrou, C, Glavieux, A, Thitimajshima, P. Near Shannon limit error correcting coding and decoding. In: Proc. ICC-1993, Geneva, Switzerland, May 1993, 1064–1070.

[C&G04] Cai, X, Giannakis, G B. Adaptive PSAM Accounting for Channel Estimation and Prediction Errors. To appear in IEEE Trans. Wireless Comm., 2004.

[CEL1] The CELTIC initiative – Cooperation for a European sustained Leadership in Telecommunications. CELTIC Purple Book, Part two: Technical Scope of the CELTIC Initiative. September 28, 2004. [online] – URL: http://www.celtic-initiative.org/Publications/purple-book.asp

[CGB03] Cui, S, Goldsmith, A J, Bahai, A. Energy-constrained modulation optimization for coded systems. In: Proc. GLOBECOM 2003, 372–376, Dec. 2003.

[CMT04] Caire, G, Müller, R R, Tanaka, T. Iterative Multiuser Joint Decoding: Optimal Power Allocation and Low-Complexity Implementation. In: IEEE Trans. Inform. Theory, 50 (9), 2004.

[CPM04] Chan, Y S, Pei, Y, Modestino, J W. On cross-layer adaptivity and optimization for multimedia CDMA mobile wireless networks. In: Proc. 1st International Symposium on Control, Communications, and Signal Processing, March 2004, 579–582.

[CRM04] Carneiro, G, Ruela, J, Ricardo, M. Cross-Layer Design in 4G Wireless Terminals. In: IEEE Wireless Communications, April 2004, 7–13.

[D&Ø04] Duong, D V, Øien, G E. Adaptive trellis-coded modulation with imperfect channel state information at the receiver and transmitter. In: Proc. Nordic Radio Symposium 2004, Oulu, Finland, Aug. 2004.

[F&G98] Foschini, G J, Gans, M J. On Limits of Wireless Communications in a Fading Environment when Using Multiple Antennas. In: Wireless Personal Communications, 6, 311–335, 1998.

[Fos96] Foschini, G J. Layered Space-Time Architecture for Wireless Communications. In: Bell Labs Technical Journal, 1, 41–59, 1996.

[G&A02] Gesbert, D, Akhtar, J. Breaking the Barriers of Shannon’s Capacity: An Overview of MIMO Wireless Systems. In: Telektronikk, 98 (1), 53–64, 2002.

[G&V97] Goldsmith, A J, Varaiya, P P. Capacity of Fading Channels with Channel Side Information. In: IEEE Trans. Inform. Theory, 43 (6), 1986–1992, 1997.

[HAN02] Hanzo, L, Yee, M S, Wong, C-H. Adaptive Wireless Transceivers. New York, Wiley, 2002.

[HHØ00] Hole, K J, Holm, H, Øien, G E. Adaptive Multidimensional Coded Modulation over Flat Fading Channels. In: IEEE J. Select. Areas Commun., 18 (7), 1153–1158, 2000.

[HOL02] Holm, H. Adaptive Coded Modulation Performance and Channel Estimation Tools for Flat Fading Channels. Trondheim, Norway, NTNU, April 2002. PhD thesis 2002:18.

[HSK01] Han, D-S, Seo, J-H, Kim, J-J. Fast Carrier Frequency Offset Compensation in OFDM Systems. In: IEEE Trans. Consumer Electronics, 47 (3), 364–369, 2001.




[HØA03] Holm, H et al. Optimal design of adaptive coded modulation schemes for maximum average spectral efficiency. In: Proc. IEEE SPAWC 2003, Rome, Italy, June 2003.

[ITU1] ITU. Framework and overall objectives of the future development of IMT-2000 and systems beyond IMT-2000. Geneva, ITU, 2003. ITU Recommendation ITU-R M.1645. URL: http://www.itu.int/rec/recommendation.asp?type=folders&lang=e&parent=R-REC-M.1645

[ITU2] ITU. General focus areas for research and further study for the future development of IMT-2000 and systems beyond IMT-2000. September 28, 2004. [online] – URL: http://www.itu.int/ITU-R/study-groups/rsg8/rwp8f/docs/focus-areas.doc

[JØH04] Jetlund, O et al. Spectral efficiency bounds for adaptive coded modulation with outage probability constraints and imperfect channel prediction. In: Proc. Nordic Radio Symposium 2004, Oulu, Finland, Aug. 2004.

[KAR03] Karlson, B et al. Wireless Foresight: Scenarios of the Mobile World in 2015. Chichester, UK, Wiley, 2003.

[KFL01] Kschischang, F R, Frey, B J, Loeliger, H-A. Factor Graphs and the Sum-Product Algorithm. In: IEEE Trans. Inform. Theory, 47 (2), 498–519, 2001.

[KST04] Koetter, R, Singer, A C, Tuchler, M. Turbo Equalization. In: IEEE Signal Processing Magazine, 21 (1), 67–80, 2004.

[K&S02] Kaiser, S, Svensson, A (eds.). Broadband Multi-Carrier Based Air Interface. WWRF Working Group 4 White Paper, version 1.4. In: Proc. 7th Wireless World Research Forum Workshop, Eindhoven, the Netherlands, Dec. 2002.

[LGW01] Li, Q, Georghiades, C N, Wang, X. An Iterative Receiver for Turbo-Coded Pilot-Assisted Modulation in Fading Channels. In: IEEE Comm. Letters, 5 (4), 145–147, 2001.

[LOE04] Loeliger, H-A. An Introduction to Factor Graphs. In: IEEE Signal Processing Magazine, 21 (1), 28–41, 2004.

[M&N71] Murakami, S, Nakagami, M. Adaptive feedback communication system in a fading environment. In: Memoirs of the Faculty of Engineering, Kobe University, 17, 127–147, 1971.

[NEW] Description of the pan-European Network of Excellence on Wireless COMmunications (NEWCOM). September 28, 2004. [online] – URL: http://commgroup.polito.it/newcom/

[NOE03] Noels, N et al. Turbo synchronization: an EM algorithm interpretation. In: Proc. ICC 2003, 4, 2933–2937, May 2003.

[N&L02] Nikookar, H, Lidsheim, K S. Random Phase Updating Algorithm for OFDM Transmission with Low PAPR. In: IEEE Trans. Broadcasting, 48 (2), 123–128, 2002.

[O&R02] Orten, P, Risløw, B. Theory and Practice of Error Control Coding for Satellite and Fixed Radio Systems. In: Telektronikk, 98 (1), 78–91, 2002.

[PAR04] Park, S Y, Kim, Y G, Kang, C G. Iterative receiver for joint detection and channel estimation in OFDM systems under mobile radio channels. In: IEEE Trans. Veh. Tech., 53 (2), 450–460, 2004.

[RJK02] Ryu, H-G, Jin, B-I, Kim, I-B. PAPR Reduction Using Soft Clipping and ACI Rejection in OFDM System. In: IEEE Trans. Consumer Electronics, 48 (1), 17–22, 2002.

[RLP04] Ryu, H-G, Lee, J-E, Park, J-S. Dummy Sequence Insertion (DSI) for PAPR Reduction in the OFDM Communication System. In: IEEE Trans. Consumer Electronics, 50 (1), 89–94, 2004.

[SHA48] Shannon, C E. A Mathematical Theory of Communication. In: Bell Syst. Techn. J., 27, 379–423 and 623–656, 1948.

[SPEC] Staple, G, Werbach, K. The End of Spectrum Scarcity. In: IEEE Spectrum, March 2004.

[SRK03] Shakkottai, S, Rappaport, T S, Karlsson, P C. Cross-Layer Design for Wireless Networks. In: IEEE Communications Magazine, 74–80, Oct. 2003.

[SVE03] Svensson, A et al. An OFDM Based System Proposal for 4G Downlinks. In: Proc. Multi-Carrier Spread Spectrum Workshop, Oberpfaffenhofen, Germany, Sept. 2003.

[S&W91] Steele, R, Webb, W T. Variable rate QAM for data transmission over mobile radio channels. In: Proc. WIRELESS ’91, Calgary, Alberta, Canada, 1–14.




[Tar99] Tarokh, V, Jafarkhani, H, Calderbank, A R. Space-Time Block Codes from Orthogonal Designs. In: IEEE Trans. Inform. Theory, 45, 1456–1467, 1999.

[T&G03] Toumpis, S, Goldsmith, A J. Performance, optimization, and cross-layer design of media access protocols for wireless ad hoc networks. In: Proc. ICC-03, 3, 2234–2240, May 2003.

[V&H96] Vahlin, A, Holte, N. Optimal Finite Duration Pulses for OFDM. In: IEEE Trans. Commun., 44 (1), 10–14, 1996.

[WAN99] Wang, X, Poor, H V. Iterative (Turbo) Soft Interference Cancellation and Decoding for Coded CDMA. In: IEEE Trans. Commun., 47 (7), 1046–1061, 1999.

[WHS91] Webb, W T, Hanzo, L, Steele, R. Bandwidth efficient QAM schemes for Rayleigh fading channels. In: IEE Proceedings I: Communications, Speech, and Vision, 138 (3), 169–175, 1991.

[WWRF] Wireless World Research Forum (WWRF). September 28, 2004. [online] – URL: http://www.wireless-world-research.org/

[YFY03] Yusof, S K, Fisal, N, Yin, T S. Reducing PAPR of OFDM signals using partial transmit sequences. In: Proc. 9th APCC 2003, 1, 411–414, Sept. 2003.

[YTR02] Ytrehus, Ø. An Introduction to Turbo Codes and Iterative Decoding. In: Telektronikk, 98 (1), 65–77, 2002.

[ZHE03] Zheng, H. Optimizing wireless multimedia transmission through cross layer design. In: Proc. ICME ’03, 1, I-185–8, July 2003.

[ØHH04] Øien, G E, Holm, H, Hole, K J. Impact of Channel Prediction on Adaptive Coded Modulation Performance in Rayleigh Fading. In: IEEE Trans. Veh. Tech., 758–769, May 2004.

[ØIE02] Øien, G E. Information Theory: The Foundation of Modern Communications. In: Telektronikk, 98 (1), 3–19, 2002.

Geir E. Øien (38) received his MSc and PhD degrees from the Norwegian Institute of Technology (NTH) in 1989 and 1993, respectively. From 1994 until 1996 he worked at Stavanger University College as associate professor. Since 1996 he has been with the Norwegian University of Science and Technology, since 2001 as full professor of information theory. Prof. Øien is a member of IEEE and the Norwegian Signal Processing Society. He is author/co-author of more than 50 research papers in refereed international fora. His current research interests are in information theory, communication theory, and signal processing, with emphasis on analysis and design of wireless communication systems.

email: [email protected]



The adoption, use and social consequences of mobile communication

Rich Ling, sociologist at Telenor R&D

In 2003, 100 % of Norwegian teens between the ages of 16 and 19 had a mobile telephone. This surprising statistic points to the rapid adoption of a technology that was only marginally commercialised when these teens were born in the mid to late 1980s. This paper is based on analysis of survey data collected by Statistics Norway in cooperation with Telenor, and of surveys conducted by Telenor. Over the period described here, as many as 10,000 individuals have participated in the various surveys. I will look at some of the watershed transitions associated with mobile telephony, including its adoption into the broader society, teens’ enthusiastic embrace of the technology and the rise of SMS – led by teens. The paper also examines some of the dynamics that have not changed, including general time use on the phone, and the gendering of the technology. Finally, the article considers some of the broader social consequences of mobile telephony.

Introduction

According to statistics collected by Statistics Norway in 2003, 100 % of the teens aged 16 – 20 interviewed in their media use survey had a mobile telephone. That is to say, Norway’s best organization in the area of survey research was not able to turn up teens without mobile telephones. This finding is astonishing both for the speed with which the transition has come and for the omnipresence of the technology. Only five years earlier, a minority of the teens had a mobile telephone. In the intervening period, however, the device had established itself in teen identity as securely as any other artifact. The same survey showed that 86 % of all Norwegians over the age of 8 had a mobile telephone.

By way of a somewhat poor comparison, the telephone took 60 years to reach 80 % household penetration in the US. Electric lights took 30 years and the automobile took 60 years to reach 80 % penetration in the US. The radio reached an 80 % adoption rate in about 16 years, starting in 1920. The TV took 12 years to reach 80 % and about 35 years to reach nearly universal adoption (Fischer 1992).

These are poor comparisons because many of these innovations required the development of extensive infrastructure, whereas GSM telephony could in some respects hitch a ride on the pre-existing landline telephony system. Another difference is that the statistics cited by Fischer are for household adoption, not personal adoption. In this respect the figure of 86 % represents a much higher rate of penetration than for the household items described above.

In this article I am interested in looking into the adoption and use of mobile telephony over the past decade in Norway. From the commercialization of GSM – which happened in 1993 – until the present, there has been a dramatic shift in the adoption and use of mobile telephones. Norway, and indeed Scandinavia, are among those areas where this revolution has been the most intense. The mobile telephone has become commonplace across Europe, in Japan, Korea, Israel and in some other portions of the world. The revolution has been visible in North America, but it has been relatively unknown in large portions of Asia (outside south-eastern Asia) and particularly in Africa. Thus, the experience of Scandinavia is likely to provide insight into how mobile communication will change society in other areas of the world.

Interestingly, the adoption of mobile telephony has not been uniform even within Norway. Various socio-demographic groups have, at various times, gone through the adoption process. In the early stages of adoption it was often business persons – as seen in the stereotype of the yuppie – and perhaps delivery persons who were the adopters. As the system developed, and as alternative subscription systems were developed, teens and young adults started to own and use mobile telephones. As will be shown below, there are relatively few people in Norway (about 7 per 100) who have no access to a mobile telephone. In almost all other cases there is at least the ability to borrow a mobile telephone on an irregular basis.

With this dramatic change, one can also see elements of stability as well as various forms of social consequences. On the one hand there are gendered use patterns that, in some ways, mirror the broader gender patterns in society. At the same time, there are various social consequences of mobile telephony that include the enhanced ability to coordinate activities, the provision of security, the development of identity and the strengthening of social networks.




Watersheds in the use of mobile telephony

The general adoption of GSM

The dominating trend over the last decade is the widespread adoption of GSM telephony (see Figure 1). Access to this form of communication has penetrated into Norwegian everyday life. It was really that system which popularized the mobile telephone in Europe and in many parts of the world. On a worldwide basis about 7 out of 10 mobile telephones use GSM. GSM telephones were often the telephones that allowed people to make calls whenever and wherever they wished. However, there have been at least two transitions previous to the commercialization of GSM telephony, namely manual mobile telephony and the NMT system.

Manual mobile telephony was available in Norway from the late 1960s. Using this system, the caller had to call a telephone operator and request that a call be set up with the desired party. Starting in the early 1980s, however, came the commercialization of the NMT system. It represented one of the first cellular systems wherein one could “roam” from one country to another, albeit within Scandinavia. During the mid 1980s the growth rate for NMT telephones was well over 150 % per year.

NMT (and other systems of its ilk) was the system adopted by yuppies and other highflying users. The system was relatively expensive and the equipment was bulky and awkward to use. It often meant that one used a car-mounted device, or a large suitcase-sized portable unit that was filled mostly with batteries. Eventually, the size of the devices was reduced to something more similar to a smallish loaf of bread. The actor Michael Douglas, playing Gordon Gekko, the rogue financier in the film Wall Street, established the use of these devices in the popular imagination. Gekko is pictured calling the young, uninitiated stock trader from the beach outside his home in the Hamptons. Gekko, of course, uses one of the early hand-held mobile telephones for the call. The imagery is that of the well-heeled stock magnate who can afford to use the latest technology. Interestingly, within a decade the mobile telephone would be available to literally all persons, at least all Scandinavians, at absurdly low prices. Thus, the early image of the mobile telephone as a plaything of the rich gave way to status being attached to the consumption of certain types of devices, as opposed to simple ownership.

Indeed it was not until the mid-1990s and the introduction of GSM that the subscription rates for the NMT system began to fall. The NMT system still lives a marginal life at the time of this writing. Those who wish to have the maximum level of coverage, such as hunters and those who use the wilderness, favor it. However, NMT will eventually be decommissioned and the bandwidth reallocated to other uses.

From 1993 it has been the GSM system that has dominated the market for mobile telephony. During the period of introduction the growth rates were truly phenomenal. In Denmark, TeleDanmark hoped for about 15,000 users in 1993 but was able to sell more than four times as many. The same was reported by Sonofon (Haddon 1997). Indeed the growth rates for Norway were on pace with those of the other Scandinavian countries. As noted above, this transition may be one of the swiftest adoption curves for a major technology. Where the car, the traditional telephone, the TV and even the PC/Internet have taken years and decades to be adopted, the mobile telephone seems to have been swallowed whole. GSM dominates all other systems and has become the de facto world standard for mobile telephony.

Figure 1 The rise of mobile telephony in Norway 1969 – 2003 (number of subscriptions x 1000, by year, for manual mobile telephony, NMT, GSM post-paid and GSM pre-paid)

The final transition shown in Figure 1 is the growth of pre-paid GSM subscriptions. Starting in the late 1990s this subscription form became one of the most popular ways to start using a GSM telephone. The subscriber could purchase a complete package including the mobile terminal, a SIM card and a certain amount of pre-paid use. These packages have been sold from newsstands, grocery stores and even from vendors on the street. The traffic costs per minute for pre-paid subscriptions are relatively high, but at the same time the subscriber does not need to pay a fixed subscription fee every month. Thus, these subscriptions are ideal for those who have only occasional use for a mobile telephone, since they can use the mobile phone as needed without paying the fixed subscription fee. The other group that has welcomed this subscription form is younger teens. It is perhaps the most common form for this group: somewhere around 70 % of all 13 – 20 year olds use this type of subscription. As the teens move out of their parents’ home and establish a life of their own, they also migrate over to a traditional post-paid subscription. However, approximately 40 % of the women and 20 % of the men in the 20 – 60 year age group, and about half of retirees, have a pre-paid subscription.

Pre-paid subscriptions have an immediate appeal for parents who wish to control the use of their child’s mobile telephone. In addition, parents often use the pre-paid card as an early object lesson in letting the teen earn the money needed to support their mobile telephone habit. Another element in teens’ adoption of mobile telephony was access to inexpensive – and heavily subsidized – terminals. The establishment of pre-paid subscriptions and the general access to inexpensive terminals resulted in the growth of the teen market, as we will see below.

The final feature of the chart is the flattening of the curve after 2000. The material shows that the number of subscriptions is approaching the absolute number of persons in Norway. There are approximately 4.58 million people in the country and there are slightly more than 4 million telephone subscriptions. There are often subscriptions associated with functions (ambulances, payment terminals, etc.) and there are cases of people having multiple subscriptions. In addition there are some groups that are not part of the mobile revolution, notably the elderly and the young pre-teens. Thus, not every person will necessarily have a subscription. However, the total number of subscriptions could easily exceed the population, particularly if devices such as vending machines, heating systems in cottages and the like receive their own subscriptions. Indeed, in Luxembourg and in Taiwan there are more mobile subscriptions than there are people.

Teens and their adoption of the mobile telephone 1997 – 2001

The year 1999 was a watershed when thinking of teens’ access to the mobile telephone. Our data shows that in 1997 there were relatively few teens who reported owning a mobile telephone (see Figure 2). This was in the period before the development of pre-paid subscriptions, and handsets were relatively expensive. Thus, ownership was generally reserved for those teens who had jobs and who were especially interested in mobile communication. This often meant that it was the males who had mobile telephones, presumably not only because they had jobs but because they were the ones interested in mobile telephony. During this period there were significantly more males who reported ownership.

Two years later the adoption rates were much higher and the gender differences had been eliminated. The material here shows that the actual profile of the curve had changed between 1997 and 1999. Where fewer than 20 % of the mid-teens had a mobile telephone in 1997, this had risen to over 70 % in the interim. The difference was no longer between the mid and older teens, but between the younger and the mid-teens. Interestingly, the gender gap had also been erased.

Finally, in 2001 the great preponderance of teens between 13 and 20 had a mobile telephone. In this last period the gender gap had again opened up; however, this time it was the females who were the most enthusiastic consumers. In 2001 there were significantly more female than male teens with a mobile phone. By way of interpretation, there are several issues at play. Early in the adoption cycle it seems that there was a fascination with the technology that motivated adoption. This technical relationship to the device seems to be more often developed among males than females. As the adoption process proceeds, the mobile telephone goes from being seen as a technical device to being a tool to support social interaction. Where males seemingly excel with technologies, many have commented that women are often facile when considering social interaction. Indeed women are often better at the mechanics of conversation and social interaction. In addition, women often have the responsibility to organize social dealings.

Thus, to trace the gendered adoption of the mobile telephone is to trace its transition from being a technical fascination, and perhaps a status symbol, to being a tool to support social interaction.

Teen adoption of SMS in the late 1990s and adult adoption in 2003

Teen users were among the first to adopt the use of SMS on a regular basis. Already in 1998, teens with mobile telephones were sending two to three SMS messages per day (see Figure 3). Thus, along with their adoption of the mobile telephone, there was the almost immediate discovery of SMS as a way to communicate. Some of this may be a legacy from the extended use of pagers by teens. Immediately previous to their adoption of the mobile telephone and SMS, the use of pagers represented the leading edge of teen technology. Teens had developed various forms of sending coded messages to one another using the landline telephone system and pagers. Thus, a “3” could mean that the gang is meeting now, a “6” might mean that I cannot come this evening, etc. The discovery and adoption of mobile telephones along with the discovery of the SMS system provided a two-way version of this form of signaling. An added advantage was that originally the system was free of charge. This meant that it was a natural channel of communication. The teen needed only to learn how to access SMS and learn the somewhat clumsy form of text entry.

Teens took SMS into use quite quickly. The material in Figure 3 shows that between 1998 and 2003 the number of SMS per day more than doubled. In addition, it shows that use had been largely concentrated among teens. This again underscores the situation seen in Figure 3, where it was teens and young adults who were the most likely to use SMS on a daily basis. Here, looking at the actual number of SMS messages being sent on a normal day, teens were the most active. Until 2003 the curve describing the number of SMS being sent had been characterized by a rather steep peak around the late teen and young adult stage. As the reader follows the curve describing the use by adults and the elderly, it falls dramatically. Thus, in many respects the use of SMS was highly focused around teen culture. It was not a part of adults’ repertoire of communication channels. Teens and young adults were more likely to use SMS on a daily basis and they were likely to use it many times during the day. The same was not true of other groups.

Figure 2 The adoption of mobile telephones by teens, 1997 – 2001 (percent adoption by age, 13 – 19, for males and females in 1997, 1999 and 2001)

This situation, however, is evolving. The data from 2003 shows that the dramatic fall in daily use for adult groups has been eliminated. Rather than a direct fall, there is more of a gradual downward curve. The same is true when considering the number of SMS messages sent. In 2003, adults have gone from sending one or perhaps two SMS messages per day to sending 3 – 4 messages.

The situation in 2004

What is the situation with regard to the use of mobile communication in Norway anno 2004?

Access to mobile telephony in Norway – Increasing personalization

By almost any measure it is clear that Norway is a mature market when it comes to mobile telephony. As of the start of 2004, 87 % of all persons over the age of nine reported personally owning a mobile telephone. In 1999 slightly less than six of ten reported owning a mobile (see Figure 4). Looking at a somewhat broader measure, in 1999 about 80 % of the population had some form of access to a mobile telephone.1) They either reported owning one or having access to a mobile telephone that was perhaps shared with others. Thus, in 1999, 58 % owned a mobile telephone alone, about 22 % had shared access and the remaining 20 % reported having no access at all.

These numbers had changed by 2003. In that year 87 % had exclusive ownership of a mobile telephone. In addition about 7 % had some form of shared access and the remaining 6 % reported having no access. There are two general trends here. The first is the growth in ownership and the second is the increasingly individualized ownership of mobile telephones. In countries where the adoption rates are lower, it is far more common to have a shared “family” mobile telephone, particularly among the youngest and the oldest portions of the population. However, as the adoption rate increases and as use becomes more common, there is the growing sense that a particular device belongs to a single individual. Many of the features of the handsets and the subscriptions support this latter interpretation. Having a personalized calling list, a log of one’s own SMS messages, and various personal information on the telephone in effect personalizes the device. In addition, as features such as MMS, electronic payment and various PDA functions become more common, the mobile device will also become increasingly like a woman’s purse or a man’s pocket book; that is, it will become a personal effect and a repository for our personal affairs.

Beyond the functional dimensions of the mobile device, there is also a style-related aspect. The ownership and display of a mobile telephone is, and will continue to be, used as a type of fashion accessory. It will be selected, displayed and interpreted as an extension of the self, just as jewelry, clothing and other accessories are. Again, this plays on the growing personalization of the device.

Figure 3 The mean number of SMS messages per day 1998 – 2003 by age and gender (mean number of SMS per day, by age group 13 – 79, for 1999, 2001, 2002 and 2003)

1) These statistics are in reality for all the population that is over 9 years of age.

Number of calls per day

The material from SSB shows that men use both the landline telephone and the mobile telephone more often than women (see Figure 5). In addition the material shows that women talk longer on the landline telephone, but not on the mobile. Starting with the number of calls per day, men generally call more often than women (4.3 vs 3.5 times per day, respectively). In addition men are more inclined to use the mobile telephone when they call. In general men place about half of their private calls via a mobile telephone, while women use the mobile telephone in about a third of all cases.

These things vary somewhat with age (see Figure 5). For both men and women, and considering both landline and mobile telephony, the golden age of use is

Figure 4 The individualization of the mobile telephone in Norway, 1999 – 2003

[Chart: percent of the population in 1999 – 2003 with no access, occasional access, regular access or exclusive access to a mobile telephone]

[Chart: mean number of calls per day by age group (9 – 12 through over 67); series for men landline, men mobile, women landline, women mobile]

Figure 5 Number of calls per day (all platforms) and mobile


the late teen years and the early 20s. It is during this decade in one's life that the telephone seems to be the most central, at least when thinking of the number of calls made. Males reported making about six calls a day during this period and women reported making about five. Roughly eight of ten calls are mobile based for men and slightly more than seven of ten are mobile based for women during this period of life.

The intensity of calling, and the fact that it is largely mobile based, is not surprising in many respects. It is in this period of life that one is quite nomadic. Often teens and young adults are in transition from the homes of their parents into either student life or into the early portions of their careers. In addition, partnerships are often in flux and there is not the routinized stability that one finds in the earlier portions of one's life or in those portions that follow this period. Teens and young adults do not necessarily have the convenience of having their most intimate sphere living in the same home, and thus there is a need to contact others via the telephone in order to coordinate activities and maintain friendships. In addition, their nomadic life style means that mobile telephony fits well into their daily routines.

In the 9 – 15 age group one often operates within a relatively bounded range of movement, particularly while one is in elementary school. During this period the stirrings of emancipation have only begun. The child operates within the local sphere and there is little need for telephonic contact on any broad scale. In a different way, the stability of middle-aged life means that there is less need for telephonic contact with one's peer group. The most intimate sphere is often equivalent to those who live within the four walls of the home. In addition, the pressures of scheduling within various organizations (work, day care, children's free time activities etc.) mean that there is a regularity to one's scheduling. There is not the same room for free renegotiation of activities that perhaps characterizes the lives of teens and young adults. This is reflected in the number of calls made. While there are many calls being made in the private sphere, the volume is approximately a third less.

Interestingly, the number of landline calls drops during the "nomadic" period. The drop for both sexes is about one call per day. However, this group is still about two calls per day over the average of others when considering the use of the mobile telephone. Thus, the number of mobile calls more than makes up for the drop in landline calls. In addition, a certain percentage of this group does not have access to a landline telephone.

Length of the calls

Where men call more often, women speak longer once they have called, particularly in the case of landline telephony (see Figure 6). The other interesting aspect in terms of the time spent on the telephone is that – with some exceptions – there is not the same "golden age" during the late teens and young adulthood that one finds when looking into the number of calls. In general, women report using the telephone about 30 minutes a day whereas men say that they use it about 20 minutes. There is a rapid rise in voice based telephony use when considering the difference between the youngest users and the late teen users. The data shows that teen girls between 16 and 19 report spending the longest time on the telephone in general and also on the mobile telephone. The older female users reported somewhat lower use, particularly when considering their landline use. The other interesting issue here is that mobile telephony is used in about equal parts with landline telephony among men. However, aside from the teen respondents, women spend about twice as much time on the landline telephone as on the mobile for voice based communication.

Thus, while SMS has generally been the realm of teens, and specifically teen girls, voice mobile telephony is in many respects the realm of middle aged men. There is a certain irony here. It is often women who have the responsibility for the coordination of familial life. This is in terms of the instrumental aspects of daily activities (the "soccer mom" as portrayed in the US experience). In addition, there is often an expressive dimension to women's work in the sense that they are often the person who is in contact with the extended family and carries out various forms of care giving. Given this situation, one might suggest that the mobile telephone would be ideal as a tool to carry out at least the instrumental portions of this work. However, it seems that women have not adopted the device to the same degree as men. By way of explanation, men often have a job subsidized mobile telephone and thus there is not the same economic barrier for men. At another level, the mobile telephone is not seen as being the appropriate device for a longer telephone conversation. It is seen as being expensive, the device becomes warm during longer calls, etc. Thus, care giving and the more expressive aspects of network maintenance – that is, those areas of activity that are often within the purview of women – are better carried out via the landline telephone.

Clearly this means that the average length of a call increases as one goes from the younger age groups to those that are older. The elderly female callers report the longest mean call length (about 12 minutes per call) while teen boys report the lowest mean (less than 2 minutes per call).


The use of SMS

There is a rough similarity when looking at the curves describing the daily use of voice mobile telephony and those describing the use of SMS (Figure 7 shows SMS use). In general, the curves are low for the youngest pre-teens. Those who are in the teen groups and the young adult groups are at the top of the curve, and finally those who are in the adult and the elderly groups report far lower use. We have seen this same general pattern when considering the use of mobile

Figure 6 Minutes per day for all telephony and mobile by age and gender, Norway 2003
[Chart: mean minutes per day (0 – 30) by age group (9 – 12 through over 67); series for males landline, males mobile, females landline, females mobile]

Figure 7 Percent using SMS on a daily basis by age and gender, Norway 2002 – 2003

[Chart: percent who use SMS daily (0 – 100) by age group (9 – 12 through over 64); series for men 2003, women 2003, men 2002, women 2002]


telephony for calling. It is also apparent here. About half of the pre-teens with a mobile telephone reported sending SMS messages on a daily basis. This rose to 90 % for those in the late teen and early adult groups, and then fell gradually to around 30 % of those with a mobile telephone in the oldest age group.

It is interesting to note that women report somewhat higher rates of use2). While 72 % of the men with mobile telephones reported using SMS on a daily basis, 81 % of the women reported the same. The data shows that for almost all age categories women were more inclined to use SMS than men. It is only among the oldest groups that there is parity.

There is also data on the daily use of SMS in 2002 contained in Figure 7. One can see that there has been a transition between 2002 and 2003 in this regard. Where in 2002 SMS was largely a teen and young adult form of interaction, older users have started to use this form of communication to a greater degree. This point will be further developed below.

What has not changed

General time used on the phone

One of the first things that seems to have been quite stable since the late 1990s is the general amount of time we use for calling (see Figure 8). In 1998 the mean time respondents reported spending on the telephone was 23.8 minutes per day. In 2003 it was 24.3. The same is generally true with regard to the reported use of mobile telephony for voice calls. The respondents who had a mobile telephone in 1999 reported using 7.1 minutes per day, while in 2003 it was 8.7 minutes.3)

The age based profiles have also been rather stable (see Figure 9). The youngest respondents have not used much time on the telephone. However, as they progress through the teen years their usage more than triples. For the youngest users the mean daily time on the telephone is about 10 minutes. For those in their early 20s it is over 30 minutes. This change is the most dramatic shift in use for all age groups. There is no other life transition that witnesses the same radical change in telephonic behavior. Clearly this change takes place against the backdrop of the child's emancipation from their parents. The telephone is that realm where many of the aspects of teen life take place. Gossip is exchanged; romances are established, maintained and ended. Social activities are arranged and teen identity is developed. As one moves away from their parents' home and into a more or less nomadic young adult period, the telephone continues to play a central role. During this period in one's life many of the same social affairs need to be arranged telephonically as in the case of teens.

2) Chi² (1) = 9.9, sig. = 0.002
3) It needs to be noted that there were more mobile telephone users in 2003 than in 1999.

Figure 8 The mean minutes per day on the telephone

[Chart: percent of telephone time that is mobile (0 – 90) by age (10 – 80); series for 1999, 2000, 2001, 2002, 2003 and the mean]


As noted, the peak time of use comes when one is in their mid 20s. From that point the respondents reported a decreasing amount of time on the telephone. This long slow decline continues until approximately the point at which the retirement period begins. The summary data from 1998 to 2003 shows a slight rise in reported use among the oldest respondents.

It is also interesting to look at the age groups who choose to use mobile and landline telephony. Young adults reported using more time with mobile than landline telephony. By way of contrast, the elderly users reported almost no time on mobile telephony, all the while maintaining a relatively high level of landline telephony use.

The percentage of all telephony time that is spent on the mobile telephone also shifts for different age groups. In reality it is wrong to discuss this issue in a chapter examining stable elements in the use of telephony, since it has increased with time. In 1999 approximately 30 % of all telephone time – that is, all time in telephone conversations – was spent on the mobile telephone. In 2002 this had risen to 41 %. The year 2003 saw a drop to 36 %. Thus, while the total time spent on the telephone has remained about the same, it seems that mobile telephony has nudged its way into the total time budget here.

When looking at this same material across the age groups, one sees that it is particular groups that have adopted the use of the mobile telephone. Here one sees that it is the young adults and middle aged individuals that are the most active voice mobile users. The young adults report that more than half the time used on the telephone is spent talking on mobile telephones. The other group is middle-aged telephone users. This is the group that most often has subsidized access to mobile telephony via their job. Thus, one might expect that the increased use is a consequence of this.

Those outside the revolution

We have been interested in following the adoption of mobile telephones. Early in the adoption process the mobile telephone was often seen as part of a family's holdings of equipment. It was used by the individual who had use for it at a particular moment. Children often had to, in effect, show a stronger need than adults, since there is often a moral barrier to a child's use of the device. As time has gone by, a more individualized relationship to the device has arisen. Instead of being a commonly held item in the home, it is an individual's. Indeed, many of the services are focused on individual consumption. As noted above, the catalog of names, sent and received SMS messages, various personalizations such as the ringing sound, logos and covers, and even the specific model – all are individualized. Even the fact that PIN codes are commonly used underscores the sense that the mobile telephone is an individualized device.4)

Figure 9 Percent of calls that are made on the telephone

[Chart: mean minutes per day (0 – 35) by age (9 – 80); series for all telephony, landline and mobile]

4) It is interesting to note that concrete ownership is often a slippery concept. In many cases the de jure and the de facto owner of the mobile telephone can be different. Many people who work in larger organizations report owning a mobile telephone that has actually been purchased by their employer.


Thus, as the market develops it seems that one moves from a situation in which the mobile telephone is seen as a type of common property to a situation where it belongs to a single individual.

As we will see below, most people between 15 and their early 70s either have a mobile telephone or have access to one. The two major groups who do not have one are the very young and the very old. Data shows that the elderly are only slowly adopting the mobile telephone. Part of this may be due to the parsimonious ethic of the pre-WWII generation. As a cohort that experienced both the depression and the deprivation of the Second World War, they often carry an attitude of careful consumption. This is in contrast to the post-WWII "baby boom" children, who perhaps carry an ethic of freer consumption and who were also often exposed to more electronic developments in their working lives. As the generation of pre-war persons moves from the scene and is replaced by those born after WWII, one can suggest that there will be a more open attitude towards the ownership of the mobile telephone.

It is unlikely that young children will have mobile telephones unless they become some form of ambient device. The data shows that only about 10 – 15 % of the youngest children have a mobile telephone of their own.5) This has been quite stable for the last few years. There are some who receive one in the case of their parents' divorce. In this way, the non-resident parent has a direct line of contact with the child and does not need to go through the perhaps hostile filter of their ex in order to speak with the child. Beyond this group, there is little need for younger children to own a mobile telephone. Their social world is often quite close by, and the responsibility of owning and maintaining a device may be beyond them.

Gendered use of the telephone

As noted above, men make more calls per day than women, particularly when considering mobile telephony. In almost all age groups this is the case. It is particularly obvious when considering the use by young adults and in the case of middle aged persons. By way of contrast, women talk longer on the telephone. Aside from the situation of teens and their use of the mobile telephone, this is most often the case when considering the landline phone and its use by adult women. Finally, SMS seems to be more of a female than a male arena.

These patterns, and in particular those associated with voice telephony, seem to play on more traditional relationships to the technology. Where men seemingly have a large number of shorter interactions while ostensibly moving about, the data points to the notion that women have fewer, but longer, telephone conversations from a fixed location. A simplification may be the quick, spontaneous call vs. the good friendly chat on the phone. These two approaches to voice telephony point, perhaps, to the socialization into different social roles. Where the spontaneous call may be an instrumental affair in order to quickly take care of some arising issue, the long conversation may point to the expressive maintenance of social ties. It is incorrect to say that it is all one way or the other, but the data here seems to point to some of the broader issues that have been described by others with regard to the gendering of communication (Tannen 1991).

What is likely to soon change

The current situation indicates that the mobile telephone is overwhelmingly being used for voice interaction and to send SMS messages. These are the main areas of income for the mobile network operators. However, there are other areas where mobile communication may become more common. These include enhanced forms of interaction (MMS, push to talk and chatting), and other forms of data transmission.

MMS, that is the transmission of messages that can include text, pictures and sound, is a new form of interaction that is now on the market. In many respects this is a further development of SMS messaging in that it is, relatively speaking, asynchronous. It may come to occupy a somewhat similar position in the public imagination. It may become a way for individuals to share their daily experiences with others in their intimate sphere. Another function may be to enhance the communication between, for example, tradespersons who need to specify with a relatively high degree of precision the situation at a work site. The MMS photo in Figure 11, for example, was sent by a carpenter to an architect in order to confirm that the conception and the execution of a particular plan were the same. Thus, we see that there are both expressive and, perhaps more importantly, instrumental issues at play in the eventual adoption and use of MMS. One can also speculate that, just as with the evolution of email, SMS and MMS will eventually merge into a single type of service. Along the same lines, one can imagine that multiparty chat rooms may develop in the mobile realm.

Another functionality that has already arrived on the scene is the so-called "push-to-talk" service. This is

5) A much larger group has various forms of access to the device.


a form of IP telephony that is simplex, as opposed to the traditional duplex of the telephone world. That is, the "channel" is only available to one person at a time. Thus, users often have to adopt the "walkie-talkie" traditions of saying "roger" and "over", etc. The advantage is that more than two persons can simultaneously participate in a session. In addition, it is a relatively efficient service when seen from the perspective of system resources. Finally, it is not limited to only local use, as with traditional radio. Rather, a push-to-talk session can include persons from broadly different geographic locations. In practical terms, groups that are in need of close coordination can use services like this. Families at shopping centers and working crews at building sites come immediately to mind as potential users.

Social consequences of mobile telephony

Up to this point I have focused on the number of phones, the length of conversations, etc. Another issue is the actual impact of mobile telephony on our lives. It seemingly satisfies certain needs and at the same time can lead to difficult and unexpected situations. On the one hand, it is wonderful to get a message from a child while we sit on the bus, it is good to coordinate (or perhaps micro-coordinate) a meeting, or to call ahead to our family or friends and let them know that because of traffic we will be late in getting to dinner. At the same time, it is perhaps embarrassing to hear it ringing when we forgot to turn off the sound during that critical sales meeting.

We are in the process of working out our relationship to the mobile telephone. This is clearly not a static thing. As new situations arise we use the device in new and unexpected ways. New features give us new possibilities. At the same time we have to work out how and when to use the device. We are bending the device to our purposes, and we are reformulating our lives around the possibilities that it provides us. We are domesticating it (Ling 2004).

A short list of the mobile telephone's social consequences indicates that the device provides us with an enhanced ability to coordinate our activities with others, it gives us a sense of safety, and it has become an element in teens' and parents' negotiations over emancipation.

Taking these in order, the greatest contribution of mobile telephony will be its contribution to coordination. In modern society we have experienced the rapid growth of the cities and the adoption of rapid, individualized transportation in the form of the automobile. We know from many studies, both in Norway and in other countries such as France, Germany, Japan and Korea, that the bulk of traditional landline telephone calls are used primarily for instrumental purposes, in other words for coordination. We use the telephone to make and confirm appointments, organize schedules and arrange the delivery and recovery of children at various events.

Mobile telephony expands these possibilities. With the mobile telephone we can plan – and re-plan – activities anywhere and at any time. This can be done far more conveniently than with the traditional telephone since we are, in effect, calling a person, and not a geographical position. Thus, we do not need to be at a specific place, node or location to receive information. This increases the efficiency of planning our everyday activities. If we get out of a meeting early we can call our spouse and alter who will pick up the kids at day-care or shop for groceries. When in the grocery store we can again call to find out if it was Swiss or Cheddar cheese that is needed for tonight's dinner. These are mundane issues, but they are significant in the way that they lubricate the machinations of everyday life. Indeed, the ability to quickly coordinate activities in a complex society is probably the most significant contribution of the mobile telephone.

Coordination is also a key use among tradespersons, sales people and other workers who spend their time away from an office. The ability to quickly exchange information with colleagues, order materials and check details with the office facilitates their work process.

The ability to coordinate on a mobile basis has indeed been revolutionary for some portions of society. A good example of this is deaf persons. Previous to mobile telephony, and specifically SMS, this group

Figure 11 An MMS photo taken by a carpenter and sent to an architect to check on the details of a construction situation


was reliant on a special telephone and translation service in order to coordinate their activities and their daily needs. If, for example, an unexpected problem arose and there was the need to re-plan a meeting, it was often impossible to assemble the technology and competence to send out the message, particularly if people were en route. Text based SMS has cut through this Gordian knot with a simple and effective way to help this group coordinate their activities.

Looking now at safety, the mobile telephone has found a niche in that it provides us with a sense of security. Mobile telephony offers persons with chronic conditions (and those who care for these people) a broader range of movement. The mobile telephone means that help is accessible should a problem arise. The same can be said for those suffering from an acute problem that can range from punctured tires to life threatening situations. We need to take care here, however. It is important not to be lured into a false sense of sanctuary. The nearest base station can be out of range, the batteries can be empty and the electronics can be vulnerable to moisture. In addition, use of the mobile phone in certain situations (most particularly while driving) has been shown to be dangerous. It is easy to think that we can clear up a few quick tasks while driving. However, research shows us that driving and talking on the phone is a dangerous combination.

Finally, as discussed above, one of the most surprising aspects is its adoption and use by teens. The combination of pre-paid cards and easy access to mobile telephones has meant that this group is among the most enthusiastic mobile telephone users. To put this into context, one of the major tasks of teens is to develop a sense of identity and to emancipate themselves from their parents. The role of the parent is to provide their children with the ballast they need for this transition. The peer group is also an important element in that it provides the teen with a reference group and a milieu in which the individual can try out their nascent adult roles.

The mobile telephone is a perfect tool in this situation, particularly when seen from the perspective of the teen. It provides them with a communication channel over which they have control. It is free from the surveillance that parents and siblings can enforce over the traditional landline telephone. Beyond free access, the terminal itself is an icon of freedom. One need only look at the advertisements for the popular mobile telephone terminals. The terminal is also a locus for information on who is a part of the gang and for a variety of SMS messages. Thus, the (relatively) unhindered access to friends, as well as the ownership of a device that serves as a type of "friendship central", is a powerful philter.

Teen girls are central in this development. While teen boys were the first to really adopt the device, one can say that it is among teen girls that mobile telephony has found a robust form. Analysis shows that teen girls are more active users. They are also strong SMS users. When comparing the types of messages teen girls send to those sent by boys, we see that teen girls write longer messages. They include more information in their messages. They use more complete grammar and they seem to be more nurturing in their messages.

In sum, mobile telephony has found its place in Norway. It is still being worked out how, where and when it will be used by various groups. That it will be used, however, is not in question.

Bibliography

Fischer, C. 1992. America calling: A social history of the telephone to 1940. Berkeley, CA, University of California.

Haddon, L., Ed. 1997. Communications on the move: The experience of mobile telephony in the 1990s. Farsta, Telia.

Ling, R. 2004. The mobile connection: The cell phone's impact on society. San Francisco, Morgan Kaufmann.

Tannen, D. 1991. You just don't understand: Men and women in conversation. London, Virago.

Rich Ling (50) is a sociologist at Telenor R&D. He received his Ph.D. in sociology from the University of Colorado, Boulder in his native US. Upon completion of his doctorate, he taught at the University of Wyoming in Laramie before coming to Norway on a Marshall Foundation grant. Since that time he has worked at the Resource Study Group and has been a partner in the consulting firm Ressurskonsult. Since starting work at Telenor he has been active researching issues associated with new ICT and society. He has led projects in Norway and participated in projects at the European level. Ling has published numerous articles.

email: [email protected]


Introduction

In 1993 I was asked to be the guest editor of an issue of Telektronikk. A few months earlier, the Mosaic web browser had been released and it suddenly became possible to demonstrate the concepts that the articles would describe: how to publish information in Cyberspace. My editing job, which in the beginning mostly consisted of nagging human authors (as other guest editors of Telektronikk will have experienced), suddenly became a very practical challenge of getting all the bits in the right order for the Web. I am proud of the fact that Telektronikk 4.93 was one of the first paper publications to become available on the Web1), and our publishing efforts received an 'Honorable Mention' at the first Web conference at CERN in 19942). Also, as I will recount below, the challenges facing us in 1993 had some long-term implications for the Web.

The challenges in 1993

Preparing a document for publication on the Web in 1994 was very similar to what it is today. You convert the textual content to HTML, adorn the text with images, add hyperlinks to other documents, and finally the document is served to browsers across the world over the Web. However, in 1993 these procedures were not established, and much of our efforts (I received help from Norwegian Telecom's MultiTorg research group) were spent experimenting with different kinds of HTML markup, how to use images, and how to link the documents together.

In particular we were faced with two challenges. First, network connections at the time were much slower than they are now, and browsers did not support progressive rendering as they do today. So, when we added images to the articles, they suddenly became slow to load and the user experience suffered accordingly. Second, there was no established way to convey the graphics design of Telektronikk from paper to Web.

The first challenge was solved by making small versions of the large images. By placing a thumbnail graphic (as they would later be referred to) on the page instead of the large image, the page would load quickly. By clicking on the small image, the larger version would be downloaded and shown. This technique is one of the standard tricks of the trade today, and Telektronikk was – to the best of my knowledge – the first to use it.
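The technique can be sketched in a few lines of HTML: a small image on the page is wrapped in a link to the full-size version, so the page itself loads quickly and the large file is fetched only on demand. The file names here are illustrative, not the actual 1993 Telektronikk markup.

```html
<!-- Thumbnail linking: the page shows only the small image;
     clicking it downloads the full-size version.
     (File names are hypothetical examples.) -->
<a href="figure1-large.gif">
  <img src="figure1-small.gif"
       alt="Figure 1 (click for full-size version)">
</a>
```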

The second problem turned out to have no good solution at the time. The paper version of Telektronikk was professionally styled and we wanted to use the same graphics design on the Web. However, HTML was designed to represent the content of the document and not its style. HTML was developed in a scientific environment where the content is more important than its presentation, and authors are encouraged to declare that some text is, say, a heading, rather than what font to use. Therefore, in order to capture the fonts and colors used in the paper version, the web version resorted to using an image. That is, instead of sending browsers the title "Telektronikk 4.93 Cyberspace" as text, an image of the text was sent instead. This is similar to how fax machines work, but on the Web this practice is frowned upon. There are several reasons for this. Text in images is only accessible to those of us who can see, and not to speech synthesizers. Also, it is impossible to search for text in images, and images generally take up more capacity than text.

Solutions

In 1994, I joined Tim Berners-Lee's web project at CERN. Berners-Lee had written three of the specifications that formed the columns of the Web: HTML, URL and HTTP. They describe, respectively, the content of the document, how to link one document to another, and how to transfer the document from the server to the client. What was lacking from the Web

How Telektronikk changed the Web

H Å K O N W I U M L I E

Håkon Wium Lie

is CTO of Opera

Software

Transferring Telektronikk onto the Web was not a straightforward task in 1993. The Mosaic browser

had been released just months earlier, and procedures for preparing documents for the Web were

not established. The problems experienced from this early work were directly used to propose better

methods in CERN’s Web project. Since then, the Web has firmly established itself as the electronic

publishing platform of choice, and we should expect that the web version of Telektronikk’s Cyberspace

edition from 1993 also will be readable in 2043.

1) See: http://people.opera.com/howcome/1993/telektronikk-4-932) See: http://botw.org/1994/awards/design.html

ISSN 0085-7130 © Telenor ASA 2004

Page 85: 100th Anniversary Issue: Perspectives in telecommunications

83Telektronikk 3.2004

in 1993 was a way to describe the presentation styleof documents – fonts, colors and layout. Could wedesign a solution that would solve the problem forTelektronikk and other web publications? In October1994 I proposed a solution: Cascading Style Sheets(CSS). Using CSS, authors could describe the presen-tation of their documents. The style sheet could beplaced in a separate file and many documents couldrefer to it. For example, one style sheet coulddescribe the look and feel of a web site, or perhapsall electronic editions of Telektronikk.
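The sharing mechanism described here can be sketched in HTML; the file name telektronikk.css is hypothetical, not from the article. Every document points at the same style sheet, so changing that one file restyles them all:

```html
<!-- In the head of each article page: -->
<head>
  <title>An article</title>
  <link rel="stylesheet" type="text/css" href="telektronikk.css">
</head>

<!-- telektronikk.css, shared by all documents, might contain rules such as:
     h1 { font-family: sans-serif; color: rgb(87, 116, 139); }
-->
```

This separation is the design choice at the heart of CSS: content stays in HTML, presentation lives in one replaceable file.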

CSS was not an immediate success. Even if authors liked the idea of using style sheets, the popular browsers at the time did not support them. Three days after CSS had been published, a company called Netscape Communications announced their first browser. The new program was a significant improvement over Mosaic and users flocked to it. Naturally, the new browser did not support CSS. I spent the next few years of my life trying to convince browser vendors to add support for CSS. Netscape finally added support in version 4. Microsoft showed eagerness when they started competing with Netscape, but the quality of their work was – to use a polite term – uneven. Only consistent pressure from web authors and users forced the big software makers to fix problems and make CSS usable.

Meanwhile, a small Norwegian browser company had started to make headlines on the web: Opera Software. Two other members of the MultiTorg group, Geir Ivarsøy and Jon S von Tetzchner, had seen the opportunity for a better browser and founded a company to turn ideas into products. I admit to not having much faith in the venture in the beginning. Competing with Mosaic, Netscape and Microsoft – or Americans in general – is a challenge. Thankfully, they did not listen to me and went ahead with their ambitious plans. In 1998 it was time for Opera to add support for CSS. In three months, Geir Ivarsøy had implemented better support for CSS than Microsoft and Netscape had achieved in three years. At that point I was convinced that Opera would make it, and I joined the company.

Web future

More than a decade has passed since Telektronikk 4.93 was published. In that time, the Web has changed the way humans access information. URLs are printed on all kinds of products and “google” has become a verb, much like “xerox” did in a paper-based world. The browser has become the window to a world of electronic information where electronic equivalents of newspapers, brochures, dictionaries, laws, application forms, and bulletin boards abound.

At a technical level, however, the Web has not changed that much in ten years. Telektronikk 4.93 was written in HTML and the web version is still readable in browsers of today. HTML, along with HTTP and URLs, is still a basic building block of the Web. CSS, JavaScript and a few other specifications have been added to the list, but today the Web has stabilized as a platform for publishing.

There are several reasons why the technical foundation of the Web is no longer developing quickly. First, there are hundreds of millions of browsers out there that use the basic building blocks. Adding support for a new specification requires that browsers be replaced, and this is a major undertaking in 2004, much more so than in 1994. Second, the Web has a solid foundation which does not necessarily need much more functionality and which has performed remarkably well under the pressure of growth.

As a rule of thumb, in order for a technology to replace an existing one it has to offer something better by an order of magnitude. I do not foresee any new publishing technology stepping forward to replace the Web in the foreseeable future. How long is the foreseeable future? I bet that the web version of Telektronikk 4.93 will live for at least 50 years. That is, common computers in the year 2043 will still be able to read those web pages we authored in 1993. Anyone against?

Håkon Wium Lie (39) is a Web pioneer, having worked on the WWW project at CERN, the cradle of the Web. He first suggested the concept of Cascading Style Sheets in 1994, and he later joined W3C (the World Wide Web Consortium) to further strengthen the standards. In 1999 he was listed among Technology Review’s Top 100 Innovators of the Next Century, and he has also been invited to the World Economic Forum as a Technology Pioneer. He is currently a member of the W3C’s Advisory Board, Technology Review’s “TR 100”, and the World Economic Forum’s “Technology Pioneers”.

Wium Lie holds a master’s degree in visual studies from MIT’s Media Laboratory, as well as undergraduate degrees in computer science from West Georgia College and Østfold College, Norway.

email: [email protected]


The cover

In 1993 it was not possible to represent Telektronikk’s cover (seen on the right) without resorting to an image. Today, with the addition of CSS, it is possible. The code fragment below encodes the Telektronikk cover from 4.93 and works in common browsers:

<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 3.2//EN">
<html>
  <head>
    <title>Telektronikk</title>
    <style type="text/css">
      BODY {
        background: rgb(87, 116, 139);
        font-family: "Arial", "Helvetica", sans-serif;
      }
      div, h1, h2, h3 {
        margin: 0;
        padding: 0;
      }
      div {
        width: 20em;
        text-align: right;
        float: left;
      }
      div h1 {
        color: rgb(200, 200, 230);
        float: right;
        background: black;
        padding: 0.3em 0.3em 0.7em;
      }
      div h1 span {
        color: rgb(255, 250, 255);
        background: black;
        padding: 0 0 0.1em 0;
        font-size: 1.3em;
        border-bottom: 0.15em solid rgb(200, 200, 230);
      }
      h1 {
        float: left;
        padding: 0.6em 0.3em 0.7em;
        font-size: 2em;
      }
      h1 span {
        color: rgb(200, 200, 230);
      }
      h2 {
        color: rgb(186, 255, 201);
        margin-right: 0.4em;
        font-size: 1.5em;
      }
    </style>
  </head>
  <body>
    <div><h1><span>Telektronikk</span></h1><h2>Cyberspace</h2></div>
    <h1><span>4.93</span></h1>
  </body>
</html>


From radiotelegraphy to fibre technology – Telenor’s history and development on Svalbard

VIGGO BJARNE KRISTIANSEN

Viggo Bjarne Kristiansen, now retired, was Managing Director for Telenor Svalbard

In a few years Telenor1) will celebrate the centenary of their presence on Svalbard. It is of course not possible to actually quote the people who were central to the pioneering work carried out in this arctic area in 1911. However, some material – though very little of it written – exists that can provide us with insight into the background of this brave project, which was not only important for Telenor, but also had national and international political ramifications.

1) Telenor has changed its name several times. Up until 1969 it was called Telegrafverket, then Televerket, and from 1995 the name has been Telenor. In order to avoid unduly confusing the readers of this article, the name Telenor will be used regardless of which name was actually used at the specific dates in the article.

The discovery of Svalbard

To understand Telenor’s interest in Svalbard, one has to have some insight into the prevailing conditions on Svalbard during the first two decades of the last century. Svalbard, or Spitzbergen as it was then called, was a terra nullius (no man’s land). Nations with an interest in the area staked claims for themselves, annexing areas where they anticipated immediate or future financial gain.

Svalbard was discovered in 1596, and activity in the area began only 15–20 years later. A Dutch expedition consisting of two ships prepared for an expedition north. The objective of the voyage was to find a sailing route through the Northeast Passage. Willem Barentsz (1549–97) was on board one of the ships. Barentsz was not the captain but the navigator, perhaps the most important position on the ship for such an undertaking.

On 17 June 1596 the crew saw land at 80 degrees north. They saw a landscape of tall, pointed mountains. For this reason they called it Spitzbergen, which means pointed mountains. A few days before that, around 9 or 10 June, they had also visited Bjørnøya (Bear Island).

Looking at a map of this northern area today, the natural question is: “What on earth was Barentsz doing near Spitzbergen when his goal was to find the Northeast Passage?” The answer is an easy one. There was no GPS or other navigational tools accurate enough to allow the crew to sail “blindly” to their destination. Navigation in this era was complicated and depended on trial and error.

Nonetheless we can be pleased that Barentsz got lost. Because of his inaccurate navigation, we now know when Svalbard was discovered and who the discoverer was. Unfortunately, Barentsz did not make it back to the Netherlands, his home country. He died on this expedition after his ship was crushed by ice at Novaya Zemlya later that year. The crew managed to get ashore, but several crewmembers died from the exertion.

Many historians believe they have reason to argue that Svalbard was discovered long before Barentsz’ arrival in 1596. Some argue that Icelandic Vikings arrived on Svalbard as early as the end of the 11th century. Others believe that Russian Pomors hunted and fished on and around Svalbard in the 14th century. Still others maintain that there were settlements on Svalbard as early as the Stone Age. Because there is so little evidence, these theories are difficult to confirm. Therefore, until proved wrong, it is correct to assume 1596 and Willem Barentsz. In other words, the discovery of Svalbard was relatively recent, only about 100 years after the discovery of America.

What were the consequences of the discovery of Svalbard?

Barentsz and his crew did not merely see ice, snow and pointed mountains in this arctic area. They also saw a large number of sea mammals such as whales, walrus and seals; in other words, considerable resources that could be exploited. It was not long before interest in these resources was awakened in Europe.

Englishmen began hunting for walrus in the area around Svalbard as early as 1605. Later, Dutch, French, Danish and Basque hunters came as well, and gradually the hunt also included polar bears and reindeer. When whaling became popular these activities really took off.


Svalbard is the name of an archipelago lying between 74 and 81 degrees north and between 10 and 35 degrees east. The land area is approximately 63,000 square kilometres, two-thirds of which is covered by ice and snow. The largest island of Svalbard is Spitzbergen. Other islands are Bjørnøya, Hopen, Edgeøya, Nordaustlandet, Prins Karls Forland and Barentsøya, as well as some smaller islands.


The hunt for whales and walrus went on until the mid-17th century. In spite of the enormous number of animals, the hunt resulted in these species being almost wiped out, and the vessels had to sail further out to sea, along the icy region near Greenland.

After the heyday of whaling, Greenland whales were still hunted, but the seal hunt became the main occupation for the next two hundred years. However, the Greenland whale also became practically extinct, and hunting for land animals such as polar bears, reindeer and foxes increased. It was at this time that hunters began to stay on Svalbard through the winter. The hunt for polar bears continued unabated until this species became protected, as late as 1972.

Basically the hunt was indiscriminate and went for ‘everything that moved’. There was no regulation, and as a consequence several species were on the point of becoming extinct on Svalbard. The only reason this greedy practice could continue was that no country was responsible for regulating the exploitation of resources. After all, Svalbard was no man’s land.

Around 1900, industrialisation began on Svalbard. This era commenced when large coal deposits were found, enticing many people to travel to Svalbard to extract this sought-after resource. Various minerals were also found, among them gypsum, lead, iron, marble and asbestos. These discoveries led to many companies and individuals travelling to Svalbard to extract and exploit these resources, often with the motive of making a quick profit.

Since the area did not formally belong to any country, those sufficiently interested simply annexed the areas they were interested in. Few of those coming to Svalbard planning to earn money by exploiting the island’s resources had much luck. Many started, but only a few actually succeeded.

Among the first to start serious commercial coal operations on Svalbard was John Munroe Longyear, a rich American businessman. He began coal operations in what is today called Longyearbyen. The place bearing his name was then called Longyear City (‘byen’ is the Norwegian word for city [translator’s remark]). The name of his company was The Arctic Coal Company (ACC). The company started operations in 1906, and it would turn out that initiatives by this company provided the basis for Telenor’s first establishment on Svalbard.

Telecommunications in no man’s land

Coal was a product which was sought after all over the world, and if ACC were to sell their coal, they were dependent on contact with the world beyond Svalbard. The only contact with the mainland was by boat. A trip to Svalbard from the mainland usually took more than two days each way.

Longyear City (now Longyearbyen) as it used to look when The Arctic Coal Company ran its operations here between 1906 and 1916

Radiotelegraphy had come into use in many places around the world, including some places in Norway. John Munroe Longyear was certain that this had to be the solution. So in 1910 he contacted the Norwegian authorities and launched a proposal that ACC would build a radiotelegraphy station in Longyearbyen. But in order for it to operate, a station had to be built in Norway that could communicate with ACC’s station on Svalbard. Of course, Munroe Longyear believed that constructing the Svalbard station was unproblematic, as the area was in the possession of ACC. What remained was only to convince the Norwegian authorities to build the station in Norway.

An alternative to Norway funding a station on the mainland was that ACC be permitted to build this station as well. For Munroe Longyear the matter was simple and straightforward. But it turned out to be more complicated than he had anticipated.

The Norwegian authorities used all possible means to consolidate their position on Svalbard, and they now regarded this as an opportunity to strengthen their presence. This meant, however, that Norway would build both stations: the one on Svalbard and the one on the mainland. This gradually became the Norwegian consensus, but only after the matter had been politically evaluated nationally and internationally.

Thomas Th. Heftye, the telegraph director at that time, was a keen politician with an interest in radiotelegraphy, which was the new telecommunication technology of the day. Heftye’s attitude and role in this matter were of crucial importance for the resolution of the Storting, the Norwegian National Assembly, to grant NOK 300,000 for the construction of a station on Svalbard and a station on the coast of Finnmark. The decision, which was made at the beginning of May 1911, aroused some irritation among the leaders of ACC, who had taken it for granted that the station on Svalbard would be built by them. In addition, several nations were interested in Svalbard and expressed their dissatisfaction with Norway’s position on this matter.

Nor was the chosen location on Svalbard the one ACC had expected: it was not to be Longyearbyen, but Finneset in Green Harbour (today called Grønfjorden, close to Barentsburg). Here Norway already had interests in a whaling station, and it was clear that the Norwegian authorities wanted to build up a Norwegian “colony” in this area, despite the fact that the site, due to its location, was not particularly suitable for radio communication. The station on the mainland was located at Ingøy on the coast of Finnmark.

Construction of the Svalbard station started in late June 1911 and the station was formally opened on 22 November the same year. An impressive amount of work was carried out during this short time span: several buildings were built, mast foundations installed, and antennae mounted and tested at the station. Hermod Petersen, department engineer, was in charge of the execution and was also the station’s first station manager, with responsibility for the station from the autumn of 1911 to the summer of 1912. In its first year of operation, the station had a crew of six men, including the station manager. The station was named Spitzbergen radio, a name that was kept until 1925, when Norway assumed sovereignty over Svalbard. The name was then changed to Svalbard radio, which it still has today.

Spitzbergen radio at Finneset in Grønfjorden was built and put into operation in 1911. The station was active until 1930, when it was moved to Longyearbyen

The Arctic Coal Company was forced to use Spitzbergen radio until 1912, when they were able to construct their own station in Longyearbyen. ACC’s station communicated with Spitzbergen radio, which relayed ACC’s traffic out of Svalbard. Until ACC had their own station in operation, telegrams were delivered to and picked up from Spitzbergen radio by couriers from ACC.

After Spitzbergen radio came on the air, several small private stations opened around the islands of Svalbard. One of the most important was Kings Bay radio, later called Ny-Ålesund radio. This station became operational in 1918 and was a vital link during Roald Amundsen’s attempt to reach the North Pole by seaplane in May 1925, and during his expedition on the airship “Norge” from Ny-Ålesund, via the North Pole, to Alaska in May 1926. During Umberto Nobile’s attempt to reach the North Pole on the airship “Italia” in 1928, the Ny-Ålesund station assisted with the radio traffic to and from the airship. As most people know, this voyage ended tragically, and the Ny-Ålesund and Svalbard radio stations played important parts in the search for the airship and her crew. Roald Amundsen also participated in the search, which ended tragically for him when his airplane was lost with everyone aboard. Neither the airplane nor the crew were ever found. It is thought that the airplane crashed into the sea somewhere between Tromsø and Bjørnøya.

Radiotelegraphy – the solution of the time

Radiotelegraphy was first used around 1900. The first station for traffic between Europe and the USA was opened in 1907, but already a year earlier Norway had begun using radiotelegraphy between the stations at Sørvågen and Røst. This was the second connection in the world to be used in a public telecommunication network. Interest in this communications solution was huge all over Europe, just as in Norway, where radiotelegraphy was seen as a good solution for covering the long coastline that would otherwise be very expensive to cover with physical lines. It was only natural that people inhabiting the coastal areas demanded that this solution be implemented. It was stressed that building land stations would provide greater safety for the fishing fleet once they could acquire radio equipment on board their boats.

As already mentioned, Heftye, the telegraph director, was strongly in favour of this form of telecommunications, but he knew that the financial situation, for which he was responsible, would not permit the construction of radio stations to the extent he felt was necessary. The state coffers were also practically empty. So it was seen as somewhat of a sensation when the Storting permitted the allocation of NOK 300,000 to construct the two stations on Svalbard and at Ingøy respectively. There was no way these investments could be justified by anticipated revenues. For this reason it was apparent that motives other than economic ones lay behind the decision of the National Assembly. It was clear that this was a Norwegian move towards acquiring a stronger foothold on Svalbard.

Spitzbergen radio – a Norwegian meeting place on Svalbard

Telenor was the first public enterprise to set up permanent operations on Svalbard. Actually, the Post Office was there first, but the postal service was a more seasonal operation. Incidentally, the Post Office was located at Spitzbergen radio when this was opened.

Thomas Thomassen Heftye was the telegraph director from 1905 until he died in a train accident in 1921

Photo: Norwegian Telecom Museum


Spitzbergen radio soon became a meeting place for Norwegians. This was not merely to seek communications with the mainland, but also to hear news from all over the world. And besides, all the visits helped the radio station acquire a reputation as something of a social gathering place.

Many people visited the radio station at Finneset during its operative years. Among the most famous was Fridtjof Nansen, who visited the station twice during his oceanic expedition to Svalbard in 1912. In his book A journey to Spitzbergen he wrote about his visit to Spitzbergen radio. He spoke highly of the station crew and particularly mentioned the hospitality and assistance he received. Among other things, he was able to adjust his chronometer with the aid of a signal sent from Paris every evening at midnight.

Thomas Heftye, the telegraph director, had already visited the station in July 1911, while it was under construction. During his visit he insisted that the Norwegian flag should fly at the station. He had his way, and the flag was hoisted on 23 July 1911. There is reason to believe that this was the first time the Norwegian swallow-tailed flag flew on Svalbard. Svalbard was still no man’s land, and this event did not go unnoticed, but drew reactions within Norway and from countries with an interest in events on Svalbard.

When the first Governor was appointed in 1925, there was no office or house for him on Svalbard. Instead he carried out his duties from Spitzbergen radio, where he lived and worked during this initial period.

Hunters and fishermen with hunting stations around Svalbard could now make their way to Spitzbergen radio to contact their employers on the mainland. From the radio station they could provide information about hunting and fishing conditions, and not least make arrangements for when they wanted to be brought home.

In many ways Spitzbergen radio contributed to creating a new and better existence for the people living on Svalbard.

On the move

The Arctic Coal Company sold its operation in Longyearbyen to Store Norske Spitsbergen Kulkompani in 1916. In this way Longyearbyen came into Norwegian hands. International work was under way to determine who would have sovereignty over Svalbard.

On 9 February 1920 the Svalbard treaty was signed in Paris. Nine countries were behind the treaty, although it permitted the inclusion of others, and today more than 40 countries have joined. The treaty gives Norway full and unrestricted sovereignty over Svalbard, but gives all citizens of the signatory states equal rights to hunting, fishing and business enterprise on Svalbard. Norway took over formal sovereignty of Svalbard on 14 August 1925. This date is celebrated as Svalbard’s “national holiday” and is an official flag-flying day.

Following this, in 1928, it was resolved that Spitsbergen radio would be moved from Finneset to Longyearbyen. The station was now named Svalbard radio. The decision about the location was later changed, and it was decided that the station would be moved to Ny-Ålesund.

Work to move the station to Ny-Ålesund began in 1929, but following a major mining accident in Ny-Ålesund on 16 August 1929, the Kings Bay Kulkompani decided to shut down mining operations there. As a result, the resolution to move the station was once again changed, and the new location would be Longyearbyen. Svalbard radio moved to Longyearbyen in 1930 and is still there today. That same year, the station at Finneset was shut down after 19 years of service. Radiotelegraphy was still the dominant communications solution and would remain so for many years to come.

Isfjord radio is built

The location of Svalbard radio between high mountains at Longyearbyen did not provide optimal sending and reception conditions, and boat traffic in particular complained about the conditions. It was therefore decided that a radio station and a lighthouse would be built at Cape Linné, at the mouth of the Isfjorden. The Norwegian Polar Institute was responsible for building the station, which was completed in 1933. The topographical conditions of this area, with an open south-facing horizon, were considerably better, and connection quality was noticeably higher with this station. Boat traffic in particular had considerably improved coverage between Svalbard and the mainland, as well as in the area around the islands.

Svalbard radio was still active, however, and traffic from this station now went via Isfjord radio, as the new station was called. During its first year Isfjord radio was run by a crew of three: the station manager, the telegraph operating assistant and a cook. Later on, an engineer and a handyman joined their ranks.


Svalbard radio in Longyearbyen and Isfjord radio at Cape Linné were actively operating radiotelegraphy until after the outbreak of World War II.

The war years at Svalbard

Spitzbergen radio was active throughout World War I, from 1914 to 1918. The station was instructed to keep a close eye on all shipping activities and in particular to give notice in the event of anything suspicious. In addition, the station was responsible for reporting weather data and data about ice conditions around Svalbard. Apart from this, life went on as normal and there was no military action on the archipelago.

However, Svalbard did become involved in World War II, 1940–1945, which led to the evacuation of its inhabitants. The evacuation was carried out on 3 September 1941, but prior to this, Svalbard radio and Isfjord radio were destroyed so that they could not fall into enemy hands.

In September 1943 Barentsburg and Longyearbyen were bombarded and set on fire by a German fleet consisting of two battleships and nine torpedo boats. When the fleet withdrew through Isfjorden, they fired a few volleys at Isfjord radio, in spite of the fact that the station had long been destroyed.

The Germans’ interest in Svalbard was primarily to get a foothold there in order to be better equipped to cut off Allied sea traffic to and from Arkhangelsk. In addition, they wanted to get their hands on the coal reserves on Svalbard. Industries in Germany, particularly the weapons industry, were running at full capacity, and coal was a sought-after source of energy.

An efficient local home guard on Svalbard helped to prevent the Germans from succeeding. Several people lost their lives on Svalbard as a consequence of the fighting. There is a monument not far from the telegraph building in Longyearbyen commemorating this.

Back after World War II

When the war ended in May 1945, work started to rebuild Svalbard. No one was opposed to resuming coal mining activities, nor to rebuilding and improving the infrastructure, including telecommunications. Svalbard radio started up again in Longyearbyen and was in operation from the autumn of 1945. Isfjord radio was rebuilt at Cape Linné and was operative from the autumn of 1946. The station building at Isfjord radio was built identical to the original building.

Development – slow but sure

Radiotelegraphy was still the dominant communications solution, but in March 1949 Svalbard received its first telephony connection to the mainland. A connection was established between Longyearbyen and Harstad radio. Thus a small step had been taken into a new era in telecommunications history on Svalbard.

In addition to operating and attending to the radio station, the crew at Isfjord radio were assigned the task of making meteorological observations for the Norwegian Meteorological Institute.

In 1954 a new telecommunications building, with a technical and administrative wing and a wing for employees, was built and put into use in Longyearbyen. Svalbard radio, which had been co-located with the District Governor, now moved into its own premises in the new telecommunications building. In 1957 a new station building was erected at Isfjord radio. The old building was kept and is still standing in the same spot next to the “new station”.

Radio and TV conditions

Reception and broadcasting conditions on Svalbard were rather poor. After the war an antenna several kilometres long was built, running from Longyearbyen up to Platåfjellet, which improved conditions somewhat. Telenor had the technical responsibility for radio and TV transmissions to Svalbard, and in 1969 a local ‘studio’ was set up in the telecommunications building in Longyearbyen. From this studio, TV programmes were relayed to receivers in Longyearbyen. The unusual thing was that the programmes were recorded on the mainland and the tapes sent to Longyearbyen by plane; they were then aired two weeks after being shown on the mainland. The same programme was sent twice per day, first in the morning and then again in the evening, so that shift workers in the mines would be able to see the programmes. This remained so until 1980, when programmes were sent with “only” one week’s delay. It was only several years after satellite connections were established between Svalbard and the mainland that viewers in Longyearbyen could see live programmes. This took place on 22 December 1984 – a nice Christmas present for the inhabitants of Longyearbyen, in other words.

Isfjord radio was built in 1933. It was destroyed during World War II, but reconstructed in 1946. The picture shows how the station looked before the new station building came in 1957

New possibilities with satellite

In 1974 work began to introduce satellite communication with Svalbard. Telenor’s research and development department initiated testing, although many were sceptical that this was possible at all. In theory it was on the borderline of what is possible, because Svalbard lies so far north that the elevation angle to the satellite would be very low.
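The geometry behind this concern can be sketched. For a geostationary satellite at the same longitude as a ground station at latitude φ, with Earth radius R_E ≈ 6378 km and geostationary orbital radius r ≈ 42164 km, the elevation angle ε is given by the standard relation below; the worked number is illustrative and not taken from the article:

```latex
\varepsilon \;=\; \arctan\!\left(\frac{\cos\varphi - R_E/r}{\sin\varphi}\right)
\;\approx\; \arctan\!\left(\frac{\cos 78^\circ - 0.151}{\sin 78^\circ}\right)
\;\approx\; 3^\circ
\qquad \text{(station at about } 78^\circ\text{ N, roughly the latitude of Isfjord radio)}
```

An elevation of only a few degrees means the signal path grazes the horizon and traverses a long stretch of atmosphere, which is why satellite reception from Svalbard was considered borderline.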

First of all it was important to find a suitable place for receiving signals from the satellite. Of all the places that were investigated, it turned out that Isfjord radio, where Telenor was already established, was the most suitable. Isfjord radio lies at the mouth of Isfjorden and has an open horizon to the south. A higher elevation of the site would have been desirable, but nonetheless this was the place chosen for testing. Another advantage of Isfjord radio was that the place already had the required infrastructure, such as buildings and electrical power. Testing took place over the course of several years, and in 1977, based on the results so far achieved, it was decided to build a satellite station at Isfjord radio. Construction started, and to make a long story short, satellite connection with the mainland was put into operation on 12 December 1978. This revolutionised telecommunications on Svalbard.

Developments speed up

From this point on many possibilities lay open, and the first suggestions for automating telecommunications were launched. Only a little over two years were to pass before this became a reality. That it nonetheless took as long as two years was due to the local subscription network in Longyearbyen being owned by Store Norske Spitsbergen Kulkompani; it had to be taken over by Telenor in order for the automation to take place. In addition, the subscription network was in such poor condition that it needed a considerable upgrade. It was also necessary to expand the technical part of the telecommunications building in Longyearbyen.

After the formalities concerning the takeover of the subscription network from Store Norske Spitsbergen Kulkompani had been arranged, the subscription network improved, and the technical part of the telecommunications building in Longyearbyen extended, the foundation was in place for automating telephony on Svalbard. At the same time radio connections were established between Longyearbyen and Ny-Ålesund via a radio link station at Kongsvegpasset. In this way the telephones in Ny-Ålesund could be automated at the same time as in Longyearbyen.

On 20 May 1981 Svalbard was connected to the international long-distance network. At around this time Svalbard became a separate telecommunications area, the 28th such area. By establishing the telecommunications area, Svalbard got its own telecommunications director with a separate administration, in keeping with the other telecommunications areas on the mainland.

So a big step had been taken into the world of modern telecommunications, but many unsolved tasks remained. Internal communications on Svalbard still had some defects. Sveagruva (the Svea mine), where Store Norske Spitsbergen Kulkompani had considerable activities, still did not have acceptable telephone connections. The connection to the mine came about through an analogue radio link built between Longyearbyen and Svea, via Isfjord radio and a small station on Cape Martin. This connection opened on 28 August 1984.

The satellite antenna at Isfjord radio has a diameter of 13 metres. Notice that the antenna points nearly parallel with the terrain. This is because the elevation angle is only approximately 3 degrees


In August 1998 the radio link station on Cape Martin burned to the ground, and a new connection to the Svea mine had to be established. This time a radio link route was chosen from Longyearbyen via a radio link station on top of the Skolten mountain (approximately 1000 metres above sea level), to a repeater on the Deinbolt mountain, and then down to Sveagruva.

With satellite connections to the mainland it was also possible to transmit NRK’s live TV broadcasts to Svalbard. As previously mentioned, this took place in December 1984. Not everyone was happy to receive live television broadcasts, and this became a topic of discussion. Those who wanted to keep the old set-up complained that they would lose the opportunity to broadcast repeats, which would be negative for shift workers.

Sveagruva received live broadcast television in 1986, and Ny-Ålesund in 1987. The Norwegian settlements of Longyearbyen, Sveagruva and Ny-Ålesund now had satisfactory telecommunications. Stereo radio broadcasts came in 1988, and in 1990 a videoconference studio was set up in Longyearbyen. The pager service was implemented in 1991, and in the same year attempts were made at live broadcasts from mobile cameras via satellite. These broadcasts were sent between Longyearbyen and the North Cape Plateau.

In 1989 there were still two Russian settlements that only had radio connections to the rest of the world. They were Barentsburg and Pyramiden, and although their total population exceeded that of the Norwegian settlements on Svalbard, they lacked the communication technology that the Norwegian settlers had. There was great interest in connecting these places to the international telecommunications network on the same level as the Norwegian settlements. After long negotiations with the Russians this idea was realised, and both locations were connected to the Norwegian telecommunications network on Svalbard during the summer of 1989.

On the mainland digitalisation was well underway, and naturally the question of digitalisation on Svalbard was discussed. According to Telenor’s digitalisation schedule, Svalbard would be the last area of the country to be digitalised. But as it turned out, this work was brought forward considerably. The situation of the Svalbard telecommunications area at that time was that professional operations were the responsibility of Telenor, and thereby the (Norwegian) Ministry of Transport and Communications, but financially the Svalbard telecommunications area was the responsibility of the (Norwegian) Ministry of Justice and the Police. In order to get the necessary funding for digitalisation, this would have to be allocated via the Svalbard budget of the Ministry of Justice and the Police. Previous experience had shown, however, that getting the necessary investment capital from that authority was rather difficult. Efforts were therefore made to move the telecommunications activities on Svalbard from the Svalbard budget over to Telenor’s regular budget. After a lot of work and case handling in Telenor and in various ministries, it was resolved that Telenor Svalbard would be removed from the Svalbard budget with effect from 1 January 1991.

This decision became known well ahead of this date, and preparations were therefore made for the construction of a digital exchange in Longyearbyen. This was erected and made operational on 19 October 1990. The analogue KV exchange was not yet nine years old when it was replaced.

It is perhaps a curiosity that Svalbard was the first fully digitalised telecommunications area in Norway.

Skolten radio link station on the Longyearbyen to Sveagruva connection lies approximately 1000 metres above sea level and is normally only accessible by helicopter

Photo: Per Krokan


The modest content and size of the telecommunications area was of course the reason for this, and makes it difficult to compare with the other telecommunications areas.

The digitalisation provided opportunities for new services, among them ISDN, a popular and sought-after solution. An upgrade and expansion of the exchange in April 1999 provided a long-awaited possibility to satisfy a greater demand for ISDN. With the digitalisation, Ny-Ålesund also acquired the same possibilities as Longyearbyen. The local telecommunications network in Ny-Ålesund was owned by Kings Bay AS until Telenor Svalbard took over in 1996.

In connection with the 400-year anniversary of the discovery of Svalbard, Telenor Svalbard issued its own phonecards. They were released on 17 June 1996, and in the same year the mobile telephone was introduced. A GSM base station covering Longyearbyen and its environs was put into operation, and coverage was later expanded by setting up base stations on Skolten and in Sveagruva. This year Barentsburg also received mobile phone coverage. The Internet became accessible on Svalbard when an Internet node was put into operation in 1997.

Organisation of the operations on Svalbard

In the course of the 1990s Telenor underwent several restructuring processes, including the cessation of the geographical organisation structure with telecommunications areas and regions. To start with, Telenor became more division-oriented, and it later became a limited company, as is well known.

The dissolution of the telecommunications areas also had an effect on operations on Svalbard. With a modest staff of fewer than 20 employees, the operations were now split up into several areas (network, private, installation, administration, etc.), each managed by its respective overarching unit on the mainland. This form of organisation was not suitable, as it did not allow for local coordination of the operation. After a few years’ experience, it was decided to consolidate the operations on Svalbard and to give the coordinating responsibility to the on-site manager. In 1996 it was decided that Telenor’s operations on Svalbard would be united in one private company, a subsidiary of the Telenor group. Telenor Svalbard AS became operational on 1 January 1997. The company had its local management on Svalbard and has since had its headquarters in Longyearbyen.

The company today has 10–12 employees and manages to a great extent all of the tasks the operation is assigned. In some cases expertise is hired in from other Telenor units. Svalbard radio, with its crew of six employees, does not belong to Telenor Svalbard AS, but is attached to Telenor Networks’ maritime department.

The present-day main building at Isfjord radio was erected in 1957. Before the radio office was moved to Longyearbyen, 12–15 people worked and lived here. Now the station is no longer staffed but is remotely monitored from Longyearbyen

Photo: Herta Grøndal


Development in the satellite area requires new solutions

As far back as the late 1980s, several players showed an interest in Svalbard’s unique location in terms of satellites in polar orbits. In simple terms, the further north you are, the more orbits you can register from a polar orbiting satellite. Many establishments were therefore looking for a location in the far north where one could read and download data from such satellites. Svalbard, with its extensive infrastructure and easy access to an airport, was clearly a natural and well-suited place. The only question was where on Svalbard. Several places were considered: Ny-Ålesund, Isfjord radio and Longyearbyen. When the decision was finally made, Platåberget at Longyearbyen was chosen as the location, and the establishment was named SvalSat. Here data are downloaded from several satellites in polar orbits. Because the downloaded data were to be conveyed further, this development also had significant consequences for Telenor Svalbard. It became necessary to increase the capacity of the radio link network between Longyearbyen and Isfjord radio in order to redirect the data via satellite to the desired destination in the world. In addition, a connection had to be established between SvalSat and Longyearbyen by use of fibre optics, later supplemented with a radio link. The transmission capacity of the satellite connection from Isfjord radio had to be expanded considerably. The number of SvalSat users increased, which required more and more capacity. The users felt that leasing satellite capacity was so expensive that other communications solutions from Svalbard had to be considered.
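The claim that the further north you are, the more orbits you can register follows from simple geometry: every revolution of a strictly polar orbiter passes close to the pole, so a station whose visibility circle reaches the pole sees every pass. A minimal sketch, with an assumed 800 km orbit altitude and a 5 degree elevation mask (illustrative figures, not from the article):

```python
import math

R_E = 6378.137  # Earth radius, km (assumed standard value)

def coverage_half_angle_deg(alt_km, min_elev_deg=5.0):
    """Earth-central half-angle of a station's visibility circle for a
    satellite at altitude alt_km, requiring at least min_elev_deg
    elevation above the horizon."""
    e = math.radians(min_elev_deg)
    lam = math.acos((R_E / (R_E + alt_km)) * math.cos(e)) - e
    return math.degrees(lam)

def sees_every_polar_pass(station_lat_deg, alt_km, min_elev_deg=5.0):
    """An ideal polar orbit crosses near the pole once per revolution,
    so a station whose visibility circle contains the pole sees every
    orbit: 90 - latitude <= coverage half-angle."""
    return 90.0 - station_lat_deg <= coverage_half_angle_deg(alt_km,
                                                             min_elev_deg)

print(sees_every_polar_pass(78.2, 800))  # Longyearbyen latitude: True
print(sees_every_polar_pass(60.0, 800))  # mid-latitude station: False
```

With these assumptions the visibility circle spans about 23 degrees of Earth-central angle, so a station above roughly 67°N sees all of the satellite's ~14 daily passes, while a mid-latitude station catches only the few passes whose ground tracks happen to come close.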

For this reason the idea of a fibre optic cable between Svalbard and the mainland was launched in 2002. Telenor was invited to participate in financing such a project, but in the end declined, which led to Norsk Romsenter (the Norwegian Space Centre) realising the project in 2003, in cooperation with the users of SvalSat. The fibre optic cable – in fact two cables on separate routes – was officially put into use in January 2004. Each of the cables is 1300 kilometres long, and Telenor Svalbard has now transferred its own connections from satellite to this fibre optic cable.

Even though Telenor declined to become owner or part-owner of the fibre optic connection, both Telenor and Telenor Svalbard were heavily involved in the construction phase. Through this solution Svalbard has telecommunications connections of a capacity and quality superior to most places on the mainland.

The development of telecommunications on Svalbard, from its inception with radiotelegraphy to the fibre optic connection to the mainland, took over 92 years. It would however be fair to say that the changes in the past 20 years have transformed Svalbard from a “developing country” in terms of communications into a state-of-the-art teletechnological society.

Changes as a consequence of new technology

For the Telenor employees on Svalbard, the work situation and work tasks have changed in keeping with the technological developments. For the employees responsible for the technical side of operations, these developments have presented new challenges demanding increased expertise. The staffing level has been under continuous assessment and has always been kept at a minimum. Isfjord radio, which had a staff of 12–15 persons before the radio office was moved to Longyearbyen, had a staff of 4–5 persons for many years. In 1999 all staff were withdrawn, and the station is now entirely remotely operated and monitored from Longyearbyen. After the implementation of the fibre optic cable to the mainland, Isfjord radio is no longer as important as it once was.

Flexibility and enthusiasm

In many ways, much of the work on Svalbard can be considered pioneering work. Expertise and experience in the construction of telecommunications in arctic areas have been difficult to come by; on-site learning by doing has been necessary. In total, an impressive piece of work has been done in this area. When, as a rule, results have been successful, this is not least because of the enthusiasm and tenacity that employees have always shown. No task has been too big or too small to be taken on. With the minimum staffing levels the operation has always had, the flexibility of the employees in the execution of tasks has been especially important. As a matter of course everyone would make an effort and cooperate to get tasks solved. It was never a question of what one knew or was able to do, but of what one had to do. In this way the job of the employees has been interesting and varied. Without the flexibility and enthusiasm exhibited by employees, there is good reason to believe that the evolution to present-day telecommunications on Svalbard would have taken considerably longer.

Telenor Svalbard’s administration building in Longyearbyen

Photo: Herta Grøndal

Viggo Bjarne Kristiansen (63) started his career in Televerket (Telenor) in 1958 as a telecommunications installer. Through internally sponsored education he later advanced to work on telephone automation. He has held various management positions in Telenor, and also in the Norwegian labour union of telecom engineers (TMLF/DNTO). He attended the Norwegian School of Administration in 1984 and the Norwegian Defence Academy in 1990–91. In 1994 he became department manager on Svalbard and advanced to managing director of Telenor Svalbard from 1997. He retired from this position in 2004.

email: [email protected]


INMARSAT – a success story! How it was established. Later developments. The role of Telenor – former NTA1)

Ole Johan Haga

Ole Johan Haga retired in 2000 after working in various positions in Telenor for 40 years.

It is well known that the establishment of the International Maritime Satellite Organisation – INMARSAT – has been a great success. Norway, and notably NTA, played a substantial, some believe a determining, role in its creation. Today, the INMARSAT system supports links for telephone and data communications to more than 287,000 ship, vehicle, aircraft and other mobile users. For the customers, notably the ship owners, it has allowed the introduction of modern management of their fleets. For Telenor the provision of mobile satellite services has been, is, and should continue to be, an outstanding commercial success. For the world, INMARSAT has been a glorious example of peaceful cooperation between all parts of the world, in a grand project of great practical value for communication and commerce, already at the time of the “cold war”. The main emphasis of this paper is on the establishment of INMARSAT. Part A describes the activities which led to the formation of INMARSAT in July 1979. The contentious aspects and their resolution are described in some detail. Later developments are briefly described in Part B. Part C covers the participation of Telenor in the organization, as well as Telenor’s utilization of the space segment.

1) NTA = The Norwegian Telecommunications Administration.

2) The term “end user” is used to designate the (ultimate) user – the customer – of the service, to distinguish him from the user of the space segment, who is the earth station operator.

Part A The establishment

1 What INMARSAT is

INMARSAT operates a constellation of geostationary satellites which extend mobile telephone, fax and data communications to every part of the world, except the polar regions.

INMARSAT was conceived and established as an international and fully global organization. The purpose of INMARSAT was to make provision for the space segment necessary for improved maritime communications and, in particular, for improved safety-of-life-at-sea communications and the Global Maritime Distress and Safety System (GMDSS). This purpose was later extended through amendments to the constituent instruments to provide space segments also for land mobile and aeronautical communications, and the name of the organization was changed to the International Mobile Satellite Organization (IMSO) to reflect the extended purpose.

INMARSAT owns (or leases), and operates, communication satellites – the space segment – for mobile communications. It sells capacity on the satellites to earth station operators, later termed distribution partners, who use the capacity to produce and sell telecommunication services to end users2). End users are mobile stations on ships and aircraft, mobile users on land, and “fixed” land users. In addition to operating the space segment, INMARSAT specifies the communication standards required for using the space segment, and controls and approves all stations which seek access to the system.

The establishment of INMARSAT was based on two international public law instruments developed under the auspices of the International Maritime Organization (IMO). These are:

a the Convention on the International Maritime Satellite Organization (INMARSAT) between State Parties to the Convention; and

b the Operating Agreement between telecommunications entities, public or private (one per Party), called “Signatories”, designated by a State.

INMARSAT was structured with three principal organs:

a The Assembly of Parties (one State, one vote), which dealt with general policy matters and the long term objectives of the Organization;

b The Council, composed of 22 Signatories, or groups of Signatories. It decided on all financial, operational, technical and administrative matters, and made provision for the space segment for carrying out the purposes of the Organization. Signatories’ voting rights were linked to their utilization of the system via investment shares;

c The Directorate, which was the executive body of the Organization, headed by a Director General, who was the Chief Executive Officer and legal representative of the Organization.

Investment in the space segment was originally financed by the owners of the organization. The owners were signatories to the Operating Agreement. INMARSAT pays compensation for the use of the invested capital, and repays the capital contributions when revenues allow repayment.

INMARSAT receives revenues from the sale of capacity on the space segment. The revenues shall cover all administrative and operational costs of operating the space segment, as well as financial costs related to the investment in the space segment.

It is important to underline that the provision of share capital to INMARSAT is a financial undertaking, quite separate from the business of producing and selling communication services to end users.

The economy of the organization has been sound. Utilization of the capacity has been good and, due to lack of competition, it was for a long period possible to set the tariffs at such a level that the owners received a comfortable compensation for invested capital. In fact, the (original) Operating Agreement specified that compensation for capital provided by Signatories should be 17 per cent per annum.

INMARSAT has not been unaffected by the changes which have taken place in the telecommunications industry, namely liberalization and fierce competition. To meet the new situation and new challenges, INMARSAT was changed into a private corporate structure in 1999.

2 National initiatives

For thousands of years, ships plying the high seas sailed without any communication with land, and their business was adapted to that situation. About a hundred years ago radio entered the maritime scene. Radio communications on short and medium wavelengths (HF and MF) radically changed the sailing conditions. A comprehensive global safety-at-sea system was gradually developed, which over the years has saved innumerable lives and large material values; ship owners could keep in contact with their ships; and seamen could talk to their families while at sea.

Even if HF radio was of tremendous importance to shipping, it suffered from certain defects, notably poor regularity in many ocean areas. With the advent of satellite technology and telecommunications via satellite in the sixties, it was quite natural that the idea of utilizing satellites for communication also with ships at sea was considered. The problems, however, seemed formidable. In order to enable antennas of reasonable size on board ships, the output power from the satellite towards the ship has to be high, in particular in the case of geostationary satellites. This implies large and expensive satellites. Economy then dictates extensive, preferably global, cooperation to establish the space segment. Intricate systems for accessing the satellite on a shared basis had to be developed. Development of steerable antennas for the ship required considerable R&D effort. On the basis of early experiments, Comsat of the USA concluded that maritime satellite communications “could be implemented with available technology, but the economics of the project did not justify a commercial venture”.

Figure 1 INMARSAT Headquarters in London. It accommodates the administration, as well as the operation centre
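The antenna-size trade-off described above can be made concrete with two textbook formulas: free-space path loss and parabolic-dish gain. The frequency, slant range, dish diameters and efficiency below are illustrative assumptions, not figures from the article:

```python
import math

def fspl_db(freq_hz, dist_m):
    """Free-space path loss in dB: 20*log10(4*pi*d/lambda)."""
    lam = 3.0e8 / freq_hz
    return 20 * math.log10(4 * math.pi * dist_m / lam)

def dish_gain_dbi(diam_m, freq_hz, efficiency=0.6):
    """Gain of a parabolic dish: 10*log10(eta * (pi*D/lambda)^2)."""
    lam = 3.0e8 / freq_hz
    return 10 * math.log10(efficiency * (math.pi * diam_m / lam) ** 2)

# Assumed figures: a 1.5 GHz downlink and ~39,000 km slant range to
# a geostationary satellite, with two candidate ship dish sizes.
f = 1.5e9
loss = fspl_db(f, 39.0e6)
for d in (0.9, 2.5):
    # A smaller dish gives less gain, so the satellite must radiate
    # more power (higher EIRP) for the same received signal level.
    print(f"{d} m dish: gain {dish_gain_dbi(d, f):.1f} dBi, "
          f"path loss {loss:.1f} dB")
```

Under these assumptions the path loss is close to 188 dB, and shrinking the ship dish from 2.5 m to 0.9 m costs roughly 9 dB of gain that the satellite side must make up, which is why small shipboard antennas push the power, size and cost onto the space segment.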

With such challenges, research institutions in Norway were skeptical. Reflecting the general attitude, Finn Lied, in his capacity as chairman of the physics committee of NTNF3), was skeptical. Bjørn Rørholt, at the time chairman of NTNF’s committee for space activity, advised strongly against committing funds and efforts in this area. “I think we will do Norwegian-owned industry a disservice by leading them into such a demanding development as a ship terminal at this stage”, he warned Finn Lied. Even the potential users, the ship owners, did not show any interest. NTA took quite the opposite view. In early 1968 NTA (responsible: Per Mortensen) provided some funding for ELAB4) for experiments with ship equipment working in the 1.5 GHz frequency band. In 1969 NTA’s research institute TF initiated a study on maritime satellite communication, to which A/S NERA contributed greatly. TF, with Nic Knudtzon as its head, was a driving force in these efforts. A tri-party constellation of NTA, EB/NERA and ELAB largely dominated further development. The results of these early studies enabled Norway to table concrete and well-founded proposals to the ITU WARC (Space) in 1971, which allocated frequency bands to maritime satellite communications.

The 1971 WARC was exclusively devoted to space communication, and several new problems had to be resolved. Both radio frequencies and the geostationary orbit are limited natural resources. To mention one interesting and important issue, the Conference decided on an equal right for all countries to use these resources. Some equatorial countries had claimed sovereignty over their part – their arc – of the equatorial orbit.

The Norwegian contribution at WARC 71 was substantial – in my opinion material – to the satisfactory result with regard to maritime satellite frequencies. There was strong opposition to allocating frequencies to such a service, even from the United States. Arne Bøe of the Norwegian delegation should in particular be mentioned as being very competent and effective in the tough negotiations.

It is in my opinion reasonable to conclude that Norway, with NTA, took a leading role in these early stages of developing maritime satellite communications.

3 Establishment of a new inter-governmental organization – the basic problems

Norway, with NTA, was convinced that a global satellite system should be developed. In 1972 IMO set up a Panel of Experts on Maritime Satellites to study the technical, operational, administrative and institutional aspects of a maritime satellite system.

The creation of a new inter-governmental organization for the provision of the space segment was first proposed by the Soviet Union, which tabled a draft convention to this effect between the first and the second meeting of the Panel of Experts. At Norway’s proposal, the Panel decided at its second meeting in April–May 1973 to start immediately to work out a Convention based on the Soviet proposal. A complete draft was developed in the course of the Panel’s work.

The Panel of Experts also considered other alternatives for organizational arrangements for a maritime satellite system, viz. to create a consortium or to make use of an existing organization such as IMO or INTELSAT. For various reasons these other solutions attracted little support. A consortium was rejected because almost all delegations felt that policy control over the global maritime satellite system should be exercised by an inter-governmental organization. Telecommunication administrations strongly opposed the idea of IMO running a communication system – as IMO’s primary concern was and should remain that of safety at sea – so IMO was rejected. INTELSAT was acceptable to the USA and some other countries. Other important maritime countries, however, with little or no interest in INTELSAT’s fixed traffic, strongly believed that they would get too little control over the maritime service facilities if INTELSAT were chosen. The absence of some important maritime countries from INTELSAT – notably the USSR – was also regarded as a serious disadvantage. The USA, on the other hand, was critical of the entire idea of creating a new inter-governmental organization. However, at the first session of the INMARSAT Conference the United States decided to support the creation of INMARSAT, but stated that it was impossible for the US Government to participate in a commercial organization and that the US Government could not undertake economic responsibilities or even give any guarantees of an economic nature. US participation would therefore have to be through a private commercial company. The simplest arrangement would be to follow the INTELSAT pattern, i.e. one agreement between governments on basic principles and organizational questions, and a second agreement between the private or public telecommunication entities which were to finance and operate the system. The US view was supported by some countries, but the majority rejected the complicated arrangement proposed by the US. These delegations argued that since telecommunication services were operated by public corporations in most countries, an organization along the lines of the US proposal, with all its inherent complications and delays, would be unreasonable.

3) The Royal Norwegian Council for Scientific and Industrial Research.

4) ELAB: research institute, at the time organized and located as an extension of the Norwegian Institute of Technology (NTH).

4 The package deal

Due to the difference of opinion on this and some other basic questions, the first session of the Conference was not able to make much progress. However, in the last days – or rather nights – of that session, informal talks were held between the delegations of the USA, Japan, the Soviet Union, the United Kingdom, Norway, the Federal Republic of Germany, France and the German Democratic Republic. The informal negotiations led to the result that the Western Europeans accepted the fundamental US demand that the member States could transfer all financial and operational functions, responsibilities and liabilities in the organization to so-called “designated entities”. The Western European delegations also accepted that this principle be put into effect by splitting the draft Convention into two agreements, one between governments (“Parties”) and one between “designated entities” (“Signatories”). The acceptance of this concept was, however, conditional upon the text of the Panel of Experts being used as a starting point. Furthermore, the USA and the Western European delegations agreed on a compromise solution for the distribution of functions between the Assembly and the Council, and also on the procurement policy. These compromises in the first session of the INMARSAT Conference were referred to as “the package deal”.

5 The distribution of functions between the Assembly and the Council

There was agreement that the Organization should have three organs: an Assembly, where all member governments (Parties) would be represented, having one vote each; a Council, where the Signatories holding large investment shares would be represented and vote in proportion to their shares; and a Directorate.

The distribution of functions between the organs, as agreed in the “package”, mainly followed the US point of view that the main power of the Organization should be vested in the Council and not in the Assembly.

The large investors in the European camp did not have much difficulty in accepting the US view in this matter – in fact they preferred the US solution – since they would have a strong influence in the Council, where voting is weighted by the investment shares held, and relatively much less influence in the Assembly, where all members are represented, each having one (un-weighted) vote. Conversely, the small investors would for the same reasons, which for them have the inverse effect, want the Assembly to have at least some real power. The smallest investors would not even have a seat in the Council. The original compromise on the division of power caused considerable difficulty at a later stage for small investors, in particular those from the Third World. The division of power agreed in the “package” was not changed, but small investors were accommodated to a certain extent by the adoption of a large Council (22 representatives) – against the wish of the large investors – and by specifying that four of the 22 representatives be elected to ensure just geographical representation, with due regard to the interests of the developing countries. A Signatory elected to represent a geographical area would be obliged to represent each Signatory in that area which was not among the 18 countries represented on the Council by virtue of their investment shares. This also gave developing countries with small investment shares some voice in INMARSAT’s decision making.

ISSN 0085-7130 © Telenor ASA 2004

Page 103: 100th Anniversary Issue: Perspectives in telecommunications

Telektronikk 3.2004

Another and related problem was the voting procedure in the Council. Most Western European delegations argued that INMARSAT was a commercial undertaking which had to make decisions in an efficient and timely manner. They proposed that the minimum requirement for a positive decision should be one third of the representatives, representing a majority of the total investment shares. Other delegations, including many small investors and the US, argued that it was essential that the important decisions of INMARSAT had broad support. The small investors were clearly afraid that a few large investors could force through decisions against a majority. The USA and probably many other countries were seriously concerned about the strong position Western Europe would get in the Council. If decisions could be made by only a majority, Western Europe might be able to force through e.g. procurement decisions with little or no extra support. These delegations therefore proposed that the requirement for decisions should be a majority of the representatives, representing two thirds of the investment shares. Neither of the alternatives got sufficient support to be adopted by the Conference – until the United Kingdom suddenly changed its position and voted for the two-thirds alternative, to the great disappointment of fellow Europeans.
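The difference between the two proposed decision rules can be illustrated with a small sketch. The thresholds are taken from the text above; the function name and the example figures are hypothetical, not actual INMARSAT numbers.

```python
def passes(votes_for, total_reps, shares_for, rule):
    """Check whether a Council decision passes under a given rule.

    votes_for:  number of representatives voting in favour
    total_reps: total number of representatives on the Council
    shares_for: combined investment share (per cent) of those in favour
    rule:       "european"   – at least one third of the representatives,
                               holding a majority of the shares;
                "two_thirds" – a majority of the representatives,
                               holding two thirds of the shares
                               (the alternative finally adopted).
    """
    if rule == "european":
        return votes_for >= total_reps / 3 and shares_for > 50.0
    if rule == "two_thirds":
        return votes_for > total_reps / 2 and shares_for >= 200.0 / 3
    raise ValueError(rule)

# Hypothetical case: 8 of 22 representatives, together holding
# 55 per cent of the shares, vote in favour.
print(passes(8, 22, 55.0, "european"))    # True  – a few large investors suffice
print(passes(8, 22, 55.0, "two_thirds"))  # False – broad support is required
```

The sketch makes the small investors' fear concrete: under the European proposal a handful of large investors could carry a decision, while the adopted rule requires both a numerical majority and a two-thirds share majority.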

6 The procurement policy

The negotiations on procurement policy repeated some of the confrontations and arguments from the negotiations of the INTELSAT definitive arrangements. The INMARSAT negotiations were, however, less lengthy and less complex – probably because the ground had been covered before. The situation was essentially as follows:

The USA held the view that the sole consideration in awarding procurement contracts should be best quality, price and delivery time, since only uninhibited competition would provide the cheapest and best service to the users. The Europeans, supported by most other countries, accepted that best quality, price and delivery time should be the dominant consideration, but wanted the idea of developing and maintaining world-wide competition to be included as an additional consideration.

The main European motive was clearly a desire to break the de facto American monopoly in space technology. They further argued that in the long run a broad base of suppliers would increase competition, thus reducing costs to INMARSAT and maritime users.

The “package deal” text states that the procurement policy shall be such as to encourage worldwide competition and, to this end, award contracts, based on responses to open international tender, to bidders offering the best combination of quality, price and delivery time. The idea of competition is given prominence in the Convention text and obliges the Organization to observe this aspect actively through all the stages of the procurement procedure. The competition aspect is more ‘active’ in the INMARSAT text than in the corresponding INTELSAT provision.

The Soviet Union participated in the informal negotiations and accepted two of the compromises of the “package deal”, viz. the distribution of power between the Assembly and the Council and the procurement policy, but could accept neither that member States could transfer all the economic responsibilities and liabilities to private entities, nor that the draft Convention be split into two agreements. The first session of the INMARSAT Conference therefore ended without a result. The Conference appointed an Intersessional Working Group, which was charged with the task of trying to resolve the fundamental problems, and thereafter develop complete draft agreements based on the draft of the Panel of Experts and on any contributions submitted later. The Intersessional Working Group met three times in the summer and autumn of 1975. Before the first meeting of the Working Group, the delegation of the Federal Republic of Germany undertook to split the draft of the Panel of Experts into two agreements. A group of Western European – including Norwegian – delegates also held informal talks with the Soviet delegation in Moscow and the US delegation in Washington during the summer of 1975.

Figure 2 A tricky problem is being resolved through intense informal discussions. Mr. Freeman of the US State Department surrounded by Professor Seyersted, Norway, upper right hand corner; Dr. Kolodkin, USSR, left; the Secretary General of IMO, wearing glasses; Mr. Kolossov, USSR, upper middle with red tie (Photo: O.J. Haga)

With some issues still unresolved, a draft Convention and a draft Operating Agreement were submitted to the second session of the INMARSAT Conference.

7 The limitation of voting power in the Council

The important outstanding point at the end of the second session of the Conference was the upper limit on the voting power of a single Signatory in the Council.

The USA wanted no such limitation – in their view each Signatory should be entitled to vote its full investment share. The majority, however, wanted to prevent any Signatory from coming anywhere near a veto power in the Council. After intense negotiations the Conference came very close to an acceptable compromise. The US accepted the principle of limitation and could – with considerable difficulty – accept a 25 per cent ceiling, a figure Western Europe could accept without much difficulty. The instructions did not, however, permit the Soviet delegation to go above 20 per cent, and since the Conference focused on this problem at a late stage in the session, it was impossible in the short time available for the Soviet delegation to get new instructions. A third session of the Conference therefore had to be called to solve this problem. A few other minor points were also left for the third session.

After a certain amount of informal contact between the most interested delegations in the interim period, agreement was reached on a 25 per cent voting limit with some qualifications, and the Conference adopted the Convention and the Operating Agreement unanimously at a short third session in September 1976.

In the foregoing I have dealt with the basic problems. In such a formidable undertaking – arranging for a commercial venture in which all parts of the world were to participate – there were obviously also other areas of dispute. The most difficult additional problem was that of agreeing on investment shares – both initial and long term.

8 Long term investment shares

The Conference accepted without difficulty the general principle that the investment share of a Signatory shall be based on that Signatory’s percentage utilization of the space segment. However, the application of this principle to the maritime situation creates problems which do not arise in the case of the fixed satellite service. Maritime satellite traffic must be handled on the basis of demand assignment of a channel for the duration of a call. How, then, is the utilization of the space segment by a Signatory to be measured?

The USA, supported by a few delegations, proposed that calls originating and terminating in the land territory of a Signatory should be attributed to that Signatory for the purpose of determining the investment share. The majority proposed that the origin of a call should determine its attribution in all cases. Although logical and in accordance with fixed service practice, the majority proposal would give the USA in particular an unreasonably low share, and a fairly complicated compromise formula was finally adopted. Utilization in both directions is divided into two equal parts, a ship part and a land part. The part associated with the ship where the traffic originates or terminates is attributed to the Signatory of the Party under whose authority the ship is operating. The part associated with the land territory where the traffic originates or terminates is attributed to the Signatory of the Party in whose territory the traffic originates or terminates. Signatories of flag of convenience countries were granted an exception from the formula: their share may, upon application to the Council, be reduced to twice the land part, but not below 0.1 per cent. Flags of convenience in this context are countries where the ratio of the ship part to the land part exceeds 20 : 1. The considerations behind this exception were that flag of convenience countries could not afford to become members under the adopted general formula, and that the majority of the delegations preferred to have these important maritime countries in as members even if they could not take their fair share of the investment burden.
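As an illustration, the compromise formula and the flag-of-convenience exception described above might be sketched as follows. The function and parameter names are my own, and the traffic figures are hypothetical.

```python
def investment_share(ship_part, land_part, apply_convenience_rule=False):
    """Illustrative sketch of the INMARSAT utilization formula.

    Utilization in both directions is split into a ship part (attributed
    to the Signatory of the Party under whose authority the ship
    operates) and a land part (attributed to the Signatory in whose
    territory the traffic originates or terminates).  Both arguments
    are percentages of total space-segment utilization.
    """
    share = ship_part + land_part
    # Flag-of-convenience exception: where the ship part exceeds the
    # land part by more than 20:1, the share may on application be
    # reduced to twice the land part, but not below 0.1 per cent.
    if apply_convenience_rule and land_part > 0 and ship_part / land_part > 20:
        share = max(2 * land_part, 0.1)
    return share

# Hypothetical flag-of-convenience country: large fleet, tiny land traffic.
print(investment_share(10.0, 0.2))        # 10.2 – general formula
print(investment_share(10.0, 0.2, True))  # 0.4  – twice the land part
```

The example shows why the exception mattered: without it, a large fleet registered under a convenience flag would carry almost the whole investment burden of its ships' traffic.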

9 Initial investment shares

Initial investment shares had to be determined through negotiations at the Conference, since Signatories’ utilization of the space segment could not be known in advance. It was agreed that initial shares should reflect expected usage. HF radio traffic, number of ships and tonnage of the merchant fleet of the Signatories were the relevant factors on which the assessment was to be made. The delegations made bids based on their own estimates. It proved to be extremely difficult to reach a total of 100 per cent of initial shares. The main reasons were that some large maritime countries – notably flag of convenience countries – were unable to accept what was thought to be natural shares, and that others were unwilling to take up the resulting shortfall, probably because it was expected that INMARSAT would suffer economic losses for a fairly long period. There was also some disagreement as to what weight the various factors – HF traffic, number of ships and tonnage – should be given when assessing expected satellite traffic. It took long hard hours of negotiation in Plenary sessions, working groups and informal meetings, and considerable good will, before the 100 per cent target was finally reached in the Plenary in the last but one night of the second session of the Conference.

It is ironic to note that in the months immediately prior to entry into force of the agreements, a decision by the US to increase its share from 17 to 30 per cent caused a struggle to avoid a consequent reduction of the initial shares of other Signatories. An increase is permitted under a provision which was included in order to secure entry into force despite the high threshold figure of 95 per cent. The substantial unilateral increase by the US was feared by many to upset the relative balance of power in the Council to an unacceptable degree. The only way to restore the balance would be for all or most of the other Signatories to increase their shares by a factor similar to that of the US. The result of such actions would be a substantial oversubscription of shares at entry into force. Upon proportional reduction of all shares at entry into force, to get a total of 100 per cent, the original balance would be restored. The struggle culminated in a race in the evening of July 15, 1979. Each increase by the US was met with corresponding increases by the others. It was the US against the rest of the world. This went on until minutes before entry into force of the agreements at midnight. The latest increase by the US was unknown (it was later known to be 65 per cent), so it was met with a joint declaration by the others that we increase by an amount proportional to the latest increase of the US. Thus the balance was kept, but the US protested against the method, so the formal existence of INMARSAT started with a conflict between the US and the rest of the world. This was of course very unfortunate but, on a positive note, the incident also illustrates the fact that the economic prospects for INMARSAT were considered then to be much more promising than they had been three years earlier. Table 1 shows the resulting initial investment shares (in rounded figures).
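The arithmetic behind the “race” is simple, and can be sketched with a minimal example (the shares below are hypothetical, not the actual INMARSAT figures): if every other Signatory scales its share by the same factor as the US, the proportional reduction to 100 per cent at entry into force leaves the relative balance unchanged.

```python
def normalize(shares):
    """Scale a dict of per-cent shares so that they sum to 100."""
    total = sum(shares.values())
    return {name: 100.0 * s / total for name, s in shares.items()}

# Hypothetical initial shares summing to 100 per cent.
shares = {"A": 20.0, "B": 50.0, "C": 30.0}

# One Signatory doubles its share and the others respond with the same
# factor, giving a substantial oversubscription (200 per cent in total)...
oversubscribed = {name: 2 * s for name, s in shares.items()}

# ...but proportional reduction back to 100 per cent restores the
# original relative balance exactly.
print(normalize(oversubscribed))  # {'A': 20.0, 'B': 50.0, 'C': 30.0}
```

This is why matching the US increase proportionally, rather than by a fixed amount, was the only move that preserved the balance of voting power in the Council.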

Despite the conflict, the Council commenced work immediately, with its first meeting starting the following day, on 16 July 1979. This first meeting was chaired by the author of this paper.

Luckily, the conflict was resolved at the second meeting of the Council.

10 Other problems

• Legal questions such as liability, arbitration, and privileges and immunities were difficult to resolve and required lengthy negotiations. Some new ground was covered which could be of value in other contexts as well.

• Formulation of financial principles created some difficulty. Some delegations wanted to stress that the only basis for INMARSAT’s operations should be “accepted commercial principles”. Others disagreed with this being the only basis. The resulting compromise was that INMARSAT shall operate on a sound economic and financial basis “having regard to” accepted commercial principles.

Country Per cent Country Per cent Country Per cent

USA 23.5 Canada 2.6 P.R. of China 1.2

USSR 14.2 Kuwait 2.0 Belgium 0.6

UK 9.9 Spain 2.0 Finland 0.6

Norway 7.9 Sweden 1.9 Argentina 0.6

Japan 7.0 Denmark 1.7 New Zealand 0.25

Italy 3.4 Australia 1.7 Bulgaria 0.1

France 2.9 India 1.7 Portugal 0.1

Western Germany 2.9 Brazil 1.7 Algeria 0.05

Greece 2.9 Poland 1.7 Egypt 0.05

The Netherlands 2.9 Singapore 1.7

Table 1 Initial investment shares


• The relation to other (competing) space segments which Parties might wish to establish also required a compromise text: INMARSAT shall express views in the form of a “recommendation of a non-binding nature” with respect to technical compatibility and possible economic harm.

• The language question brought about a collision between practical/economic and prestige/political considerations. Many official languages5) and especially many working languages would be very costly. A requirement to translate all documents into several languages slows down the work at conferences, and may even be used tactically as a means to delay proceedings. Norway tabled a very controversial document in which it was demonstrated that, on the basis of investment shares and number of countries, English, ‘Scandinavian’ and Russian would be the most important languages in INMARSAT. Against this background, and for economic reasons, Norway proposed that INMARSAT should work only in the English language. It was politically impossible for the US and UK delegations to support this position, even if they agreed. The French and Spanish delegations were furious at the demonstrated statistics and the proposal. Since it was not possible to attract sufficient support for an article as proposed by Norway, the tactic was to block the adoption of any language article. This tactic succeeded, so the result was that no language article was adopted, leaving it to the Assembly and the Council to decide which languages to use. This matter was important only for the Council. With the very different voting power distribution in the Council, it was decided to work with only English documentation, but with interpretation between the four languages English, French, Spanish and Russian in plenary sessions of the Council. Very few international organizations have managed to adopt such a language arrangement, which is very timesaving and therefore economical.

Norway was very active and contributed much to the work of the INMARSAT Conference. The core of the Norwegian delegation consisted of Ole Johan Haga of NTA, Professor Finn Seyersted, representing the Ministry for Foreign Affairs, and Kjell Skar, representing the Ministry of Communications. It is worth mentioning that Professor Seyersted was elected chairman of the drafting committee for the two agreements. Seyersted was a widely respected expert on international law, and it was largely to his credit that the language of the agreements is more concise, and easier for laymen to understand, than is usual in English and American law instruments. It is also quite exceptional that a delegate whose native tongue is not English is chosen as chairman of a drafting committee for law instruments written in the English language. In addition to numerous concrete factual contributions during the work of the conference, Norway was able to play a mediating and bridging role between the more extreme positions held by various delegations, notably between the US and USSR. Serious dissensions were due to differences in business culture. Norwegian delegates spent long hours in hotel rooms with certain delegations in the evenings after the formal meetings, trying to explain why certain representatives held such and such views.

11 The Preparatory Committee

The Preparatory Committee and its Panels were active in the period between the last session of the INMARSAT Conference in 1976 and the formal coming into existence of INMARSAT in 1979. Twenty-two countries participated in its work. The Committee provided an Interim Report to Governments and presented all its findings to INMARSAT. It studied technical specifications, operational matters, economic and marketing matters, and organizational matters. It reached conclusions and made recommendations in most of these areas, although some minor points of contention remained to be resolved. INMARSAT thus had a well prepared basis on which to start its work, once the ratification period was over.

Figure 3 Professor Finn Seyersted of the Norwegian delegation – in typical attentive stance (Photo: O.J. Haga)

5) In ITU terminology: “Official language” implies interpretation to and from that language in conferences; “Working language” implies, in addition, that all documentation shall appear in that language.


12 The INMARSAT space segment and the Joint Venture discussions

The potential members of INMARSAT had a general desire to ensure that there would be an operational maritime satellite system to follow on from the existing US-owned MARISAT system, which would reach the end of its design life in 1981. Coupled with this desire was the realization that, although the lifetime of some of the MARISAT satellites might extend beyond 1981, it would be necessary to prepare for the procurement of satellites for the follow-on system before INMARSAT itself could come into being. Since the Preparatory Committee was specifically precluded from committing INMARSAT, discussions were started between interested administrations with a view to establishing a ‘Joint Venture’, which could conclude procurement contracts for follow-on satellites. It was intended to offer these contracts to INMARSAT, when formed, and then dissolve the Joint Venture. The Joint Venture forum – later the Joint Venture Conference – studied various space segment options and developed these through de facto negotiations with potential suppliers. The result of these efforts was that two suppliers – INTELSAT and the European Space Agency (ESA) – had work in hand to manufacture complementary parts of a composite satellite system, consisting of maritime subsystems included in three INTELSAT V satellites and three dedicated satellites – MARECS – from ESA. Draft procurement contracts had also been prepared. Thus INMARSAT was in a position to take procurement action quickly, based on a recommended space segment solution from the Joint Venture Conference. Since manufacture of these satellites was in progress and offers remained valid for INMARSAT, it was not necessary to institute the Joint Venture formally.

As a result of the Joint Venture activity, INMARSAT was in a position to have its own space segment in operation from 1981. The Organization further had the benefit of the traffic base developed and the experience gained by the MARISAT system.

13 Concluding remarks on Part A

It may appear from the preceding chapters that the creation of INMARSAT was nothing but a constant struggle to resolve controversies and problems. It is true that it took much time and effort to find workable compromises where there were differences – which, incidentally, is quite normal in international contexts. But this was fortunately not the whole story. Despite the differences, the areas of common interest and agreement dominated by far, and it is this fact, together with the good will to compromise which prevailed among all, that made it possible to find workable and acceptable solutions. Although it took considerable effort to solve the various problems, even more effort was devoted to analyzing requirements and developing technical specifications and operational procedures, to carrying out market studies and economic analyses, and to preparing institutional matters such as organizational structure, rules of procedure for the organs, financial regulations and many other detailed matters. INMARSAT, when finally established, was well prepared to become a fully working and thriving organization.

Figure 4 The third generation INMARSAT satellite. Essential new feature: seven spot beams (five active) in addition to global coverage. The spot beams gave extra capacity in areas where demand from users was high, and furthermore allowed the introduction of the mini-M standard mobile station – see Figure 6

INMARSAT generation       I-1        I-1          I-1       I-2      I-3            I-4
                          MARISAT    Intelsat V   MARECS

Launch from year          1968       1980         1980      1991     1996           2005
Number of satellites      2          3            2         4 + 1    5              –
Capacity in equivalent    30         50           80        150      4 times        10 times
telephone channels                                                   that of I-2    that of I-3

Table 2 Generations of INMARSAT space segment


It was the first time in history that parties from such diverse parts of the world joined in a large commercial undertaking. In addition to providing general public maritime satellite communications in an efficient and economic way, the INMARSAT system provided a basis for developing an immensely advanced future distress system on a fully global basis. Furthermore, I strongly believe that the very constructive and positive cooperation we saw in preparing the ground for INMARSAT, and the cooperation which developed within INMARSAT, contributed to bringing East and West, North and South a little closer together.

The lengthy maritime satellite negotiations required a large number of international meetings. NTA had its fair share in hosting meetings: one Preparatory Committee meeting, three Joint Venture meetings and several working group meetings were held in Norway.

Part B Later developments

14 Development of service standards. Extension to aeronautical and land mobile

The original service standard, called INMARSAT A, provided analogue telephony and telex services. The antenna of the ship station was mounted on a gyro-stabilized platform. The ship station was expensive and the market comprised mostly large vessels. Over time the technology was refined to a large extent, greatly facilitated by the introduction of digital technology. Standard B, operative from 1993, is a digitized and improved version of Standard A; it has a wider spectrum of services and requires half as much power from the satellite, thereby allowing lower tariffs. Standard C is the “lilliputian” station, which provides data transmission at a bit rate of 600 bit/s. It supports data transmission, telex and distress messages. Standard M is also based on digital technology. It provides telephony at a low bit rate, in addition to data communication. Standards M and C are suitable for use on land, being much smaller than Standards A and B. Telenor, in partnership with the UK’s BT, developed a mini-M laptop terminal. Under the name Mobiq it was launched into service in 1997. The most recent development for the maritime market is the Fleet range of services, with the largest antenna solution offering voice and data services with a capacity of up to 64 kbit/s.

As in all other areas of telecommunications, the Inmarsat6) industry has seen a continued shift from analogue to digital services, as well as a shift from voice services to data and Internet-driven services. Inmarsat’s revenues from data services are now higher than those from voice services, even in the maritime market.

With the development of new standards, satellite communication could be made available for use on land as well as on aircraft. INMARSAT therefore decided to extend its services to land and aeronautical applications and amended its constitutional instruments to this effect. Amendments for land mobile applications were adopted on 19 January 1989 and entered into force on 26 June 1997. Amendments for aeronautical applications were adopted on 16 October 1985 and entered into force on 13 October 1989.

The change of name to the International Mobile Satellite Organization (IMSO) was adopted on 9 December 1994 and entered provisionally into force immediately.

15 Introduction of the “Global Maritime Distress and Safety System” (GMDSS)

Satellite communication readily lends itself to developing a vastly improved maritime distress and safety system. Consequently, IMO decided that the new GMDSS should be obligatory for all new oceangoing ships as from 1992, and for all ships from 1999. These decisions gave a strong boost to the development of the market for satellite communications.

Figure 5 Ship earth station Nera F77, which belongs to the so-called Fleet range of services; capacity 64 kbit/s for voice and data – global coverage. World market share 41 per cent

6) Lower case letters are used after privatization of the Organization.


NTA and ELAB contributed significantly to the specification of the system.

16 Privatisation of INMARSAT

After twenty years of successful operation, Member States and Signatories of INMARSAT decided to meet rapidly growing competition from private providers of satellite communications services by pioneering the first ever privatisation of all assets and business carried on by an intergovernmental organization, with continued provision of the public service obligations, and governmental oversight, as prerequisites of the privatisation.

At its Twelfth Session in April 1998, the INMARSAT Assembly adopted amendments to the INMARSAT Convention and Operating Agreement which were intended to transform the Organization’s business into a privatised corporate structure, while retaining intergovernmental oversight of certain public service obligations and, in particular, the Global Maritime Distress and Safety System (GMDSS). These amendments were implemented as from 15 April 1999, pending their formal entry into force. In doing so, it was recognised that early implementation of the new structure was needed to maintain the commercial viability of the system in a rapidly changing satellite communications environment, and thereby ensure continuity of GMDSS services and other public service obligations, namely: peaceful uses of the system, non-discrimination, service to all geographical regions and fair competition.

The restructuring amendments entered into force on 31 July 2001 and became binding upon all Parties, including those which had not accepted them, and the Operating Agreement terminated on the same date.

The restructuring of INMARSAT involved the incorporation of holding and operating companies, located in England and registered under British law on 15 April 1999. On the same day, the Headquarters Agreement between the UK Government and the IMSO was signed. A Public Services Agreement between IMSO and the privatised Inmarsat was also executed with immediate effect. The Operating Agreement was terminated and the Signatories received ordinary shares in the privatised Inmarsat Ltd in exchange for their investment shares. Future capital requirements will be met from existing shareholders, strategic investors and public investment through a listing of the shares on a stock exchange (IPO). The INMARSAT satellites and all other assets of the former intergovernmental organization have been transferred to the privatised operating company, which continues to manage the global mobile satellite communications system, including maritime distress and safety services for GMDSS at either no cost or at a special rate.

The residual intergovernmental organization (IMSO) continues with 88 Parties, operating through the Assembly of Parties, its Advisory Committee and a small Secretariat, headed by the Director, who is the Chief Executive Officer and legal representative of the Organization. Under the relevant provisions of the Convention, as amended, the Public Services Agreement and the Articles of Association of the Company, IMSO is charged with overseeing, and under some circumstances may enforce, fulfilment of the Company’s public service obligations and, in particular, GMDSS services. In performing this role, IMSO acts as the natural ally of IMO and watchdog for the proper provision and implementation of IMO’s requirements in respect of GMDSS by Inmarsat Ltd. To facilitate these functions, an Agreement of Cooperation has been concluded between IMSO and IMO. Under a similar Agreement with the International Civil Aviation Organization (ICAO), IMSO ensures that Inmarsat Ltd. takes into account the applicable ICAO Standards and Recommended Practices in line with the Public Services Agreement, and regularly informs ICAO accordingly.

Administrative Arrangements have also been signed between the Secretary-General of the International Telecommunication Union (ITU) and the IMSO Director. These provide the Organization with direct access to the relevant bodies of the ITU, enabling IMSO to play an active role in the development of international telecommunication policies.

In recent years, the horizons of mobile satellite communications have been expanding with ever-increasing speed, and there are several different options for the design and capability of new services. The adoption by the IMO Assembly of Resolution A.21/Res.888 – Criteria for the Provision of Mobile Satellite Communication Systems in the Global Maritime Distress and Safety System (GMDSS) – has provided a clear indication of IMO’s intention to consider opening up provision of GMDSS services in the future to any satellite operator whose system fits these Criteria. However, that remains for the future. At present, Inmarsat Ltd., with the satellite communications system that it operates, is the sole global provider of these services, although, after the restructuring of INMARSAT, the process of liberalization and privatization of global and regional satellite communications services is developing fast.

Figure 6 Application of INMARSAT M

Quoting the Director of IMSO, Mr. Jerzy W. Vonau, it is encouraging to note, in this context, that IMSO has not been able to detect any reduction or deterioration in the level and quality of the provision of GMDSS services by Inmarsat Ltd. under the new regime, compared with the situation prior to privatization. All other public service obligations have also been observed or given due attention by the Company. It may therefore be concluded that, as a result of the right political and legal decisions of the Member States, followed by a distinct but workable interface between IMSO and Inmarsat Ltd, the restructuring and privatisation of the business and assets of the former intergovernmental organization have paid off, and that the principles under which the process of restructuring took place have proved to be effective.

The formal structure of the new company is somewhat complex, but the essential organs are Inmarsat Ltd, which executes all the activities of the Company, and its parent company Inmarsat Group Holdings Ltd, which is a financial holding company owned by the shareholders. The main shareholders today are, with shareholdings in rounded figures:

• The venture capital companies Apax and Permira, who together hold 52 %

• Telenor 15 %

• Lockheed Martin 14 %

• KDDI (Japan) 8 %

In late 2003 most of the Signatories of the "old" INMARSAT sold their shares in Inmarsat Group Holdings Ltd, although some have retained a minute share in order to receive information. They remain, however, Parties to the residual intergovernmental organization IMSO. In addition to the ordinary shareholders, IMSO holds a "Golden Share", which gives it special rights in questions related to Inmarsat's obligations with respect to certain public services, notably GMDSS. The Board of Directors of Inmarsat Group Holdings Ltd consists of seven directors, three being executive officers in Inmarsat Ltd, and four being non-executives representing major shareholders. The executive officers on the Board are the CEO, CFO, and COO7). The shareholders Apax, Permira, Telenor, and Lockheed Martin each have one Board Director. Bjarne Aamodt, Senior Vice President of Telenor, is the Board Director from Telenor.

The executing company Inmarsat Ltd performs the same functions as the Directorate in the original INMARSAT. The change from the INMARSAT Council with 22 members to the Board of Inmarsat Group Holdings Ltd with seven directors is but one illustration of the profound change which has taken place. It is comforting to observe that the business of Inmarsat Ltd has continued to be financially very sound in the new situation of fierce competition.

Part C Telenor as investor in the Organization, and as exploiter of the space segment

17 The Nordic Earth Station at Eik in Rogaland

The purpose of NTA's efforts in the creation of INMARSAT was obviously to provide its allocated share of necessary funds to INMARSAT for establishing the space segment, and to utilize the space segment to provide high quality telecommunication services to the maritime industry around the clock in all seas and oceans of the world.

Norway through NTA has dutifully fulfilled all its obligations under the Convention and the Operating Agreement, including payment of its investment share. The first investment called for in 1981–82 amounted to some NOK 100 million, which was the highest investment ever by Norway in space technology.

In order to produce services an earth station "on land"8) is required. The earth station is very costly, mainly due to the complicated arrangement for accessing the space segment on a demand basis. The total traffic volume in question was rather low, so routing the traffic over a remotely located station implied insignificant expenses compared to the cost

7) CEO: Chief Executive Officer; CFO: Chief Financial Officer; COO: Chief Operating Officer.

8) "Earth station", as opposed to space segment, is the term used to designate stations located on the surface of the earth; i.e. also ship stations. So we have "earth stations on land" and "ship earth stations". The last term may sound strange, but so it has been defined.


of constructing and operating an earth station. This again called for international cooperation. A study conducted by STSK9) concluded that it was not economical to construct more than one station to serve all the Nordic countries. For economic reasons the station should be located at one of the existing satellite earth stations, viz. Tanum in Sweden or Eik in Norway. Since Norway would by far have the highest traffic volume over the station, it was decided to locate a Nordic station at Eik. An agreement to this effect was signed in late 1979. NTA was to invest in, and operate, the station, whereas the other Nordic countries were to pay parts of the operating expenses, including capital cost, proportional to their respective use. The net profits of the maritime communications services utilising Eik were split in a similar proportionate way. For a period of many years, the Eik earth station was instrumental in the Nordic countries' ability to turn the provision of mobile satellite services into a commercial success.

The Eik Nordic operation was terminated at the end of 2001, at which point Telenor remained sole owner and operator of the station.

Procurement of the earth station was based on international competition. Only two bids were received: from NEC (Japan) and EB/NERA. Both were technically acceptable. EB/NERA was chosen, having a slightly lower price. A Norwegian supplier was considered very advantageous. The tri-party cooperation of NTA/EB NERA/ELAB described in chapter 2 helped EB/NERA come up with a competitive bid for this large and complicated undertaking. The Nordic earth station at Eik was put into operation early in 1982, as the first one in the INMARSAT system in Europe.

In the years which followed, NERA captured about 25 per cent of the world market for maritime land earth stations as well as for ship stations. This represented a successful industrial spin-off of the early tri-party studies initiated by NTA.

18 The Tri-party Earth Station Cooperation between Norway, UK, and Singapore, with stations Eik, Goonhilly, and Sentosa

International cooperation in maritime satellite communications required yet another dimension. The space segment consisted at the outset of three satellites located over the Atlantic (AOR), the Pacific (POR) and the Indian (IOR) oceans, respectively, which provided coverage of all important ocean areas. To provide near world coverage (polar regions cannot be reached with geostationary satellites), land earth stations obviously need to "see" all three satellites. Eik can see AOR and IOR, so cooperation with an earth station owner in the Pacific area was necessary. It made very good economic sense to cover only the IOR from Eik, and let Goonhilly in the UK cover

Figure 7 Eik earth station (Photo: Ole Gunnar Mosvold)

9) STSK = the Scandinavian Committee on Telecommunication Satellites


AOR for the Nordic countries, whereas Eik handled British traffic destined for the IOR. With Sentosa of Singapore included in the cooperation, we all achieved world coverage at minimum cost. A similar model was later adopted by other groups of countries.

We may observe that this kind of cooperation was traditionally unheard of within the maritime community. It was a serious prestige matter for maritime countries to have their own HF coast stations. The hard economic facts in the case of maritime satellite communication forced countries to cooperate also in this matter.

Eik was one of the most successful stations in the INMARSAT system – at times the best – with high traffic volume, excellent quality and regularity. The provision of telecommunication services over the station became a lucrative business for Telenor, with annual revenues in the order of NOK 700–800 million.

With the Inmarsat industry becoming increasingly deregulated and commercialised, there has been a high degree of consolidation amongst the major players. Some of these players have, through acquisitions or strategic alliances, achieved global coverage; i.e. they now "see" all satellites in the INMARSAT system from their own or controlled ground infrastructure. This is one of the reasons for the termination of the cooperation with the UK and Singapore in the late 1990s.

19 Acquisitions

In 2001 Telenor took another bold step in mobile satellite communications with the acquisition of Comsat Mobile Communications (CMC). Comsat Mobile, with revenues in excess of US$ 100 million, on 27 May 2001 became a subsidiary of Telenor, under the name Telenor Satellite Services Inc. The transaction transferred CMC's operations, including all employees as well as the two earth station facilities in Southbury, Connecticut, and Santa Paula, California, to Telenor.

With the CMC acquisition Telenor became the world's largest provider of mobile satellite communications. Telenor now offers satellite communications customers worldwide a broad portfolio of truly seamless global services. In February 2001 Telenor also acquired the Brussels, Belgium, based Accounting Authority SAIT Communications (SAIT). With the acquisition of SAIT, Telenor strengthened its position in the retail, or end user, segment of the Inmarsat industry.

Annex 1 Historical summary

• National studies on how to make use of satellite techniques for maritime communications started in the second half of the 1960s.

• International studies were also started around this time – in the International Radio Consultative Committee of the ITU (CCIR) and the Inter-Governmental Maritime Consultative Organization (IMCO, later IMO).

• The World Administrative Radio Conference WARC (Space) allocated frequency bands to the Maritime Mobile Satellite Service in 1971.

• IMCO set up the Panel of Experts on Maritime Satellites in 1972 to study the technical, operational, administrative, and institutional aspects of a maritime satellite system. The Panel held six meetings over the 1972 to 1974 period. It recommended, inter alia, that INMARSAT be created as an intergovernmental organization and that a conference be convened to effect this. The Panel's report contained a complete draft convention for such an organization.

• IMCO convened the International Conference on the Maritime Satellite System (the 'INMARSAT Conference'), which held three sessions between April 1975 and September 1976.

• The Conference adopted and opened for signature the Convention and the Operating Agreement on the International Maritime Satellite Organization (INMARSAT). The Final Acts of the Conference were signed by representatives of 46 States. A summary of the INMARSAT instruments is given in Annex 2. Both instruments entered into force in July 1979, 60 days after States representing more than 95 per cent of the initial investment shares had become Parties to the Convention.

• The INMARSAT Conference established a Preparatory Committee, which was active from January 1977 to the summer of 1979.

Figure 8 The headquarters of Telenor Satellite Services (TSS) US operation at Rockville, Maryland


• The US consortium MARISAT established a maritime satellite system in 1976. The two MARISAT satellites later became part of the first generation INMARSAT system.

• Considerable preparatory work for INMARSAT's space segment had been carried out over a period of some two years under what was termed the Joint Venture Conference.

• A Nordic earth station was opened for service early in 1982 at Eik in Rogaland. This was the first European earth station in the INMARSAT system.

• A new Global Maritime Distress and Safety System (GMDSS), based on satellite communications over the INMARSAT system, was introduced by IMO in 1992.

• The intergovernmental organization INMARSAT was privatized in 1999.

• Telenor purchased Comsat Mobile Communications (USA) in 2001 and became the world's largest provider of mobile satellite communications.

Annex 2 Summary of the (original) INMARSAT agreements

The organization INMARSAT was based on two agreements: a Convention between participating States ("Parties") and an Operating Agreement between telecommunication organizations of the member States ("Signatories"). Article 2 of the Convention provides that each member State or Party shall either itself sign – or shall designate a competent entity to sign – the Operating Agreement.

Article 4 of the Convention states that the Party is not liable for obligations arising under the Operating Agreement. In other words, the State is not responsible for financial, economic and operational matters, but the Party shall guide and instruct the Signatory to ensure that it fulfils its responsibilities and does not violate obligations which the Party has undertaken under the Convention.

The purpose of INMARSAT, as given in Article 3 of the Convention, is "to make provision for the space segment necessary for improving maritime communications, thereby assisting in improving distress and safety of life at sea communications, efficiency and management of ships, maritime public correspondence services and radiodetermination capabilities". Also, INMARSAT shall act exclusively for peaceful purposes.

Operational and financial principles of the organization are stated in Article 5 of the Convention and some of the articles of the Operating Agreement. The Organization shall operate on a sound economic and financial basis having regard to accepted commercial principles. Furthermore, the Organization shall be financed by contributions of Signatories. Each Signatory shall contribute capital in proportion to its use of the space segment and shall receive capital repayment and compensation for use of capital when revenues allow such repayment.

Article 6 of the Convention provides that the Organization may own or lease the space segment.

INMARSAT is open for membership by all States. Also, ships of non-member countries may use the space segment on conditions determined by the Organization.

The Organization will have three main organs: the Assembly, where all member States are represented and have one vote each; the Council, with twenty-two Signatories and voting power in relation to investment shares; and the Directorate, headed by a Director General. Substantive decisions are made by a two-thirds majority both in the Assembly and in the Council. The functions of the Assembly are to consider and review the general policy and long-term objectives of the Organization and to express views and make recommendations thereon to the Council. In addition, the Assembly has some specific listed functions. All other decisions will be made by the Council. The Council will thus decide on all financial, operational and administrative questions.

The headquarters of INMARSAT will be located in London.

The procurement policy shall be such as to encourage, in the interest of the Organization, worldwide competition in the supply of goods and services. To this end the Organization shall award contracts to bidders offering the best combination of quality, price and delivery time.

The charges for use of the space segment shall have the objective of earning sufficient revenues for the Organization to cover its operating, maintenance, and administrative costs, the provision of such operating funds as are necessary, the amortization of investments made by Signatories, and compensation for use of capital.

The sum of the net capital contributions of Signatories and of outstanding contractual capital commitments of the Organization is subject to a ceiling of 200 million US dollars.

In the initial phase, before the Signatories' use of the space segment is known, the negotiated investment shares are related to expected use of the space segment and are specified in the Annex to the Operating Agreement. The Annex to the Operating Agreement also contains a provision which makes it possible for Signatories to increase the initial shares before entry into force of the Convention. This provision was adopted by the INMARSAT Conference in order to ensure that the instruments would enter into force even if some countries, due to lengthy domestic procedures, were not able to become Parties within the time limit specified in Article 33 of the Convention.

Those who are familiar with the INTELSAT Agreements will observe substantial similarities between INMARSAT and INTELSAT. Indeed, INTELSAT was to a fairly large extent used as a model, although many delegates believed that the INTELSAT Agreements are unnecessarily complicated. The texts of the INMARSAT Agreements are more concise and are believed to be clearer and easier to read for non-experts.

Bibliography

1 Collett, J P. Making Sense of Space – a History of Norwegian Space Activities. Oslo, Scandinavian University Press, 1995.

2 Gallagher, B. Never Beyond Reach – The World of Mobile Satellite Communications. London, INMARSAT, 1989.

3 Haga, O J. INMARSAT – an example of Global International Cooperation in the Field of Telecommunications. Telektronikk, 75 (4), 373–381, 1979.

4 Solbakken, K. INMARSAT Nordisk maritime jordstasjon på Eik. Telektronikk, 77 (1), 24–27, 1981.

5 Solbakken, K. INMARSAT – Systembeskrivelse. Telektronikk, 77 (1), 21–23, 1981.

6 Grimsmo, N, Bøe, A. INMARSAT – International Maritime Satellite Organization. Telektronikk, 77 (1), 16–20, 1981.

7 Knudtzon, N. Telesatellitter – et helhetsperspektiv. Telektronikk, 86 (2/3), 81–92, 1990.

8 Haga, O J. Mobil satellittkommunikasjon – Perspektiver for 1990-årene. Telektronikk, 87 (2/3), 231–236, 1991.

9 Telenor completes acquisition of Comsat Mobile Communications from Lockheed Martin. Press release by Telenor/Lockheed Martin, December 2001.

10 Vonau, J W. Restructured INMARSAT and public service obligations. London, IMSO, 2004. (Internal document of IMSO.)

11 Inmarsat Group Holdings Ltd. October 6, 2004 [online] – URL: www.inmarsat.com.

12 Interview with Bjarne Aamodt, Senior Vice President of Telenor, 27.09.2004.

13 Interview with Knut Grødem, Telenor Satellite Services, on 29.09.2004. (Assisted with paragraphs 14, 17, 18 and 19.)

Ole Johan Haga (71) holds a BSc (Honours) degree in Electronic Engineering from the University of St. Andrews, Scotland, 1960. He has worked with Telenor for 40 years in various positions. After his retirement in 2000 he has been available as senior consultant. His main responsibilities and positions have comprised:

• 10 years of duty in the construction of the national radio relay link network

• various responsibilities in the Technical Department of NTA Headquarters

• international activities in various Nordic committees, ITU, CEPT, IMO, and INMARSAT. Chairman of the INMARSAT Council 1984–86

• regional director of Oslo 1985–95

• international activity, mainly on investment projects in mobile networks in South East Asia and Eastern Europe

Ole Johan Haga's managing positions include: Head of the Radio Link Department of NTA Headquarters, Assisting Technical Director, Vice President of Telenor International, Managing Director of Telenor Satellite Services AS, and Managing Director of Telenor Invest AS.

email: [email protected]


Features of the Internet history – The Norwegian contribution to the development

Paal Spilling and Yngvar Lundh

Paal Spilling is professor at the Department of Informatics, University of Oslo, and the University Graduate Center at Kjeller. Yngvar Lundh is an independent consultant.

This article provides a short historical and personal view on the development of packet switching, computer communications and Internet technology, from its inception around 1969 until the full-fledged Internet became operational in 1983. In the early 1990s, the Internet backbone at that time, the National Science Foundation network – NSFNET, was opened up for commercial purposes. At that time there were already several operators providing commercial services outside the Internet. This presentation is based on the authors' participation during parts of the development and on literature studies. This provides a setting in which the Norwegian participation and contribution may be better understood.

1 Introduction

The concept of computer networking started in the early 1960s at the Massachusetts Institute of Technology (MIT) with the vision of an "on-line community of people". Computers should facilitate communications between people and be a support for human decision processes. In 1961 an MIT PhD thesis by Leonard Kleinrock introduced some of the earliest theoretical results on queuing networks. Around the same time a series of Rand Corporation papers, mainly authored by Paul Baran, sketched a hypothetical system for communication while under attack that used "message blocks", each of which contained an address to identify the destination. In the latter half of the 1960s, these ideas had gained enough momentum for the U.S. Advanced Research Projects Agency – ARPA (later renamed DARPA) – to initiate development and fund research on this promising new communications technology, now known as packet switching. A contract was awarded in December 1968 to Bolt, Beranek and Newman (BBN) to develop and deploy a four-node network based on this technology. Called the ARPANET, the initial four nodes were fielded one a month from September to December 1969. The network quickly grew to comprise academic research groups across the United States, including Hawaii, and in 1973 it was also extended to Norway and England. During the early 1970s, DARPA developed two alternate implementations of packet-switched networks – over satellite and ground radio. The protocols to link these three networks and the computers connected to them, known as TCP/IP, were integral to the development of the Internet. The initial nascent Internet, consisting of those three networks, was first demonstrated in 1977, although earlier two-network tests had undoubtedly been carried out. Through independent implementations, extensive testing, and refinements, a sufficiently mature and stable internet technology was developed (with international participation), and in 1980 TCP/IP was adopted as a standard for the US Department of Defense (DOD). It is uncertain when DOD really standardized on the entire protocol suite built around TCP/IP, since for several years they also followed the ISO standards track.

The development of the Internet, as we know it today, went through three phases. The first one was the research and development phase, sponsored and supervised by ARPA. Research groups that actively contributed to the development process, and many who explored its potential for resource sharing, were permitted to connect to and use the network. This phase culminated in 1983 with the conversion of the ARPANET from use of its initial host protocol, known as NCP, to the newly standardized TCP/IP protocol. Then came the start of the interim phase. All hosts on the ARPANET were required to convert to TCP/IP during early January 1983, but in reality the conversion lasted until June 1983, during which time both the old protocols and the new protocols were run simultaneously. ARPANET was divided into two parts interconnected by a set of filtering gateways. Most defense institutions were attached to one part called MILNET, which was to be integrated with the Defense Data Network and operated by the Defense Communications Agency (DCA). The other – open – part, still called ARPANET, contained university institutions, non-defense research establishments and a few defense organizations including DARPA. The newly reconstituted ARPANET remained in operation until 1990, when it was decommissioned. By that time, responsibility for the open part had been taken over by the National Science Foundation (NSF). NSF had created a small experimental network, which was replaced in 1988 by a higher-speed network called NSFNET. The NSFNET was the result of efforts by IBM, MCI and MERIT, the latter holding the contract with NSF. Other organizations also provided funding for relevant parts of the Internet. And gradually many of the regional parts of the network were privatized. The network was now, in principle, open to anyone doing computer science research, and international extensions were soon put in place. The number of attached institutions and users grew rapidly. The ARPANET technology had served its purpose and was now being phased out, replaced by higher-capacity lines and commercial routers. This coincided approximately with the appearance of the very first Web browser. The World Wide Web was invented at CERN in 1989 and has proved to be a major contributor to the usefulness of the Internet. A few years later, in 1993, the restrictions on commercial activity in the NSFNET were lifted, and the Mosaic browser was introduced by the University of Illinois. This was the start of the third phase, the commercial phase, resulting in an explosive growth in geographic coverage, number of users, and traffic volume. Of course, email and file transfer had already been in place for two decades, but the use of web browsers made the Internet easier to use and opened up a larger world of information access on a scale never before seen.

For many years the Internet and its concepts were neglected by the telecom operators and the other European research communities. In the last chapter we attempt to shed light on some of the important factors contributing to this effect.

2 The prelude; the inception of packet switching

There has been a debate for some time about who invented packet switching; was it Kleinrock at MIT, Paul Baran at Rand Corporation, or Donald Davies at the National Physical Laboratory in England? Donald Davies is recognized as the person who coined the term packet. We do not take a stand here. We believe all three studied, from a conceptual viewpoint, different aspects of the store-and-forward technology, a key concept behind packet switching. We provide a brief description of their research relevant to packet switching.

Leonard (Len) Kleinrock, a PhD student at MIT, published his first paper on digital network communications, titled "Information Flow in Large Communication Nets", in July 1961 [1]. This was the first paper describing queuing networks and analyzing message switching. He developed his ideas further in his 1962 PhD thesis, and then published a comprehensive analytical treatment of digital networks in his book "Communication Nets" in 1964 [2].

After completing his PhD in 1962, Kleinrock became Professor at UCLA. There he later established and led the Network Measurement Center (NMC), consisting of a group of graduate students working in the area of digital networks. In October 1968, ARPA awarded a contract to Kleinrock's NMC to perform ARPANET performance measurements and identify areas for network improvement.

Paul Baran, an electrical engineer, joined RAND in 1959. The US Air Force had recently established one of the first wide area computer networks for the SAGE radar defense system, and had an increasing interest in robust and survivable wide area communications networks. Baran began an investigation into the development of survivable communications networks. The results were first presented in a briefing to the Air Force in the summer of 1961, and later, in 1964, as a series of eleven comprehensive papers titled "On Distributed Communications" [3, 4].

The series of reports described in remarkable detail an architecture for a distributed, survivable communications network for the transport of speech and data. It was based on store-and-forward handling of 1024-bit message units and dynamically adaptive routing, and could withstand serious destruction of individual nodes or links without loss of end-to-end communications. At the time, the technology to implement this architecture cost-effectively did not exist. Apparently the Air Force did not see the value of this new concept at the time and did not follow up the recommendations in the report.

Donald W. Davies at the National Physical Laboratory (NPL) in England, apparently unaware of Baran's ideas, developed similar concepts. He got his original idea in 1965: to achieve communication between computers by utilizing a fast message-switching communication service [5]. Long messages had to be split into chunks, called packets, and sent separately so as to minimize the risk of congestion. This was the same approach taken by ARPA, but the ARPANET initially used the term "message switching" and later adopted Davies' terminology. The store-and-forward handling of packets became known as packet switching.
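
Davies' key step, splitting a long message into address-carrying packets that the far end can reassemble, can be sketched in a few lines (an illustrative modern Python sketch; the `Packet` fields and the 128-byte payload size are assumptions for the example, not the actual NPL or ARPANET formats):

```python
from dataclasses import dataclass

PACKET_PAYLOAD_BYTES = 128  # illustrative size; real systems chose their own limits

@dataclass
class Packet:
    dst: str        # destination address carried in every packet
    seq: int        # sequence number, so the receiver can reorder
    total: int      # total number of packets in the message
    payload: bytes

def packetize(message: bytes, dst: str) -> list[Packet]:
    """Split a long message into independently routable packets."""
    chunks = [message[i:i + PACKET_PAYLOAD_BYTES]
              for i in range(0, len(message), PACKET_PAYLOAD_BYTES)] or [b""]
    return [Packet(dst, seq, len(chunks), chunk)
            for seq, chunk in enumerate(chunks)]

def reassemble(packets: list[Packet]) -> bytes:
    """Reorder packets by sequence number and rebuild the message."""
    return b"".join(p.payload for p in sorted(packets, key=lambda p: p.seq))
```

Because each packet is self-describing, the network may deliver them out of order over different store-and-forward routes and the receiver can still rebuild the original message.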

Davies proposed a plan in 1967 [6] for a communications system between a set of terminals and a set of computers. It was based on store-and-forward of packets in a mesh of switching nodes interconnected by high-speed lines. Terminals were to be served by one or more interface computers. These interface computers acted as packet assemblers/disassemblers between the network and the terminals. The practical outcome of the NPL activity was a local packet-switched communication network that grew in the coming years to serve about 200 terminals and a dozen or so computers.


3 ARPANET; the start of a new era

In the latter half of the 1960s the packet-switching concept was mature enough to be realized in practice. We introduce the four persons most influential in the creation of the ARPANET and subsequently the Internet. Dr. J.C.R. Licklider, around 1960, had the vision of a "Galactic network" and provided the main inspiration. Lawrence Roberts published the network plan in 1967 and led the early development of the ARPANET. When Roberts left ARPA in 1973, Robert Kahn took over the responsibility for the development process and brought it to its full fruition over a period of more than ten years. He was assisted by Vinton Cerf in developing the TCP/IP protocols, the true heart of the Internet. In our opinion these two people are the main inventors of the Internet, assisted by many individuals and research groups in a great collaborative effort.

Dr. J.C.R. Licklider did research on psychoacoustics at MIT in the late 1950s. He had an unusual educational background as engineer and psychologist, and saw early the need for computers in the analysis of his research results. Licklider joined Bolt, Beranek and Newman in Cambridge, Massachusetts in 1957 to pursue psychoacoustic research. Here he was given access to one of the first minicomputers, a PDP-1 from Digital Equipment Corporation (DEC). He developed the vision of an "on-line community of people", expressed in a seminal paper in 1960 called "Man-Computer Symbiosis" [7, 8], in which he described an interactive computer assistant that could answer questions, perform simulation, display results graphically, and extrapolate solutions for new situations from past experience. In this way, computers could facilitate communications between people and be a support for human decision processes. These were quite futuristic ideas at the time, and this was the true start of the project that was later named the ARPANET.

Dr. J.C.R. Licklider was employed by ARPA in 1962 as leader of the division that was later named the "Information Processing Techniques Office", or IPTO. This office was tasked with initiating and financing advanced research and development in information processing of vital importance to the American defense. The military had a long tradition of partnership with university research. Most of the basic research for DOD was performed in the academic arena. In Licklider's days the office funded top-level academic scientists called "Principal Investigators". Licklider left the IPTO office in 1964, but had a second term from January 1974 through August 1975.

Lawrence (Larry) Roberts, after finishing his PhD at MIT in 1958, joined MIT Lincoln Laboratories and started research on computer networks. He was inspired by Licklider's visions. In February 1965, ARPA awarded a contract to Lincoln Laboratory to experiment with computer networking. In October 1965, the Lincoln Labs TX-2 computer talked to the Q32 computer at System Development Corporation (SDC) in Santa Monica, California, via a dial-up 1200 bit/s phone line. This was one of the world's first digital network communications between computers. The results of this research were presented by Marill and Roberts at the AFIPS Conference in October 1966, in a paper titled "Toward a Cooperative Network of Time-Shared Computers" [9]. In December 1966 Lawrence Roberts was asked to join ARPA to lead the IPTO effort in developing a wide area digital communications network, which was later named the ARPANET.

Larry Roberts based his network plans on the MIT research performed by Kleinrock, the BBN research of Licklider, and on his own experience. He presented his networking plan at the ACM Gatlinburg conference in October 1967 [10]. It contained plans for inter-computer communications and the interconnection of networks. Roberts met the NPL researcher Roger Scantlebury during the conference and learned about their work and the work of Baran. Later Roberts studied the Baran reports and met with him. The Baran work did not have any significant impact on Roberts' plan, according to Roberts' "Internet Chronology" (http://www.ziplink.net/~lroberts/InternetChronology.html). The NPL paper [6] convinced Larry Roberts to use higher speed lines (50 kbit/s) between the nodes and to use the word packet.

Larry Roberts left ARPA in October 1973 to become the second President of Telenet, providing a commercial data communication service based on the X.25 standard. He was followed for a brief period by Dr. J.C.R. Licklider as director of the IPTO office. In 1979 Telenet was sold to GTE, to become the data division of Sprint.

Larry Roberts has been the recipient of numerous awards. He shared the Charles Stark Draper Prize for 2001 with Robert Kahn, Vinton Cerf, and Leonard Kleinrock for their work on the ARPANET and Internet.

Robert (Bob) Kahn obtained his PhD degree from Princeton University in 1964. He then worked with the Technical Staff at Bell Laboratories and subsequently became Assistant Professor of Electrical Engineering at MIT. Then he joined Bolt, Beranek and Newman (1966 – 1972), where he was responsible for the system design of the ARPANET, the first packet-switched network, and wrote the technical proposal to ARPA that won the contract for BBN. The research team, led by Frank Heart, proposed to use mini-computers as the switching element in the network. The team consisted of Bob Kahn, Severo Ornstein, Dave Walden and others. After the contract was awarded to BBN, Bob Kahn wrote the Host – IMP technical specification.

ISSN 0085-7130 © Telenor ASA 2004

116 Telektronikk 3.2004

In 1972 Bob Kahn was asked by ARPA to organize a demonstration of an ARPANET network, connecting 40 different computers, at the International Computer Communication Conference in October 1972, making the network widely known for the first time to technical people from around the world [18]. Organizing the demonstration was a major undertaking, pushing and coordinating all involved parties to get everything to work reliably in time for the conference. The demonstration was a great success.

Bob Kahn moved to ARPA immediately after the conference, initially as program manager with responsibility for managing the Packet Radio, Packet Voice and SATNET projects. He later became chief scientist, deputy director and subsequently director of the IPTO office. While Director of IPTO, he initiated the United States government's billion-dollar Strategic Computing Program, the largest computer research and development program ever undertaken by the federal government. Dr. Kahn conceived the idea of open-architecture networking. He is co-inventor of the TCP/IP protocols and was responsible for originating ARPA's Internet Program. Dr. Kahn also coined the term "National Information Infrastructure" (NII) in the mid 1980s, which later became more widely known as the "Information Super Highway". Kahn left ARPA late in 1985, after thirteen years. In 1986 he founded the Corporation for National Research Initiatives (CNRI), created as a not-for-profit organization to provide leadership and funding for research and development of the National Information Infrastructure. He has been the recipient of numerous awards and has received several honorary university degrees for his outstanding achievements. In 1997 President Clinton presented the US National Medal of Technology to Kahn and Cerf.

As President of CNRI, Kahn has continued to nurture the development of the Internet over the years through shepherding the standards process and related activities.

Vinton (Vint) Cerf did graduate work at UCLA from 1967 until he received his PhD in 1972. ARPA released its "Request for Proposals" in August 1968. As a result, the UCLA people proposed to ARPA to organize and run a Network Measurement Center for the ARPANET project. The team included, among others, Len Kleinrock, Stephen Crocker, Jon Postel, Robert Braden, Michael Wingfield, David Crocker, and Vint Cerf.

Vint Cerf’s interest in networking was strongly in-fluenced by the work he did at the Network Measure-ment Center at UCLA. Bob Kahn, then at BBN, cameout to UCLA to participate in stress-testing the initialfour-node network, and had a very productive collab-oration with Vint Cerf. Vint did the necessary pro-gramming overnight, and together they did theexperiments during the day.

In November 1972, Cerf took up an assistant professorship in computer science and electrical engineering at Stanford, and was one of the first people there with an interest in computer networking. The very earliest work on the TCP protocols was done at Stanford, BBN and University College London (UCL). The initial design work was done in Vint Cerf's group of PhD students at Stanford. One of the members of the group was Dag Belsnes from the University of Oslo, who worked on the correctness of protocol design. The first draft of TCP came out in the fall of 1973. A paper by Bob Kahn and Vint Cerf on internetting appeared in May 1974 in IEEE Transactions on Communications [20], and the first specification of the TCP protocol was published as an Internet Experiment Note in December 1974. The three groups then began concurrent implementations of the TCP protocol, so the effort to develop the Internet protocols was international from the beginning.

Vint Cerf worked for ARPA from 1976 till 1982, having a leading role in the development of the Internet and internet-related technologies. In 1982 he became vice president for MCI Digital Information Service, leading the engineering of MCI Mail, the first commercial mail service to be connected to the internet. Then, in 1986, he became Vice President of the Corporation for National Research Initiatives (CNRI), a position he held until 1994. He then rejoined MCI, and is now senior vice president of Technology Strategy.

Vint Cerf has been the recipient of numerous awards, both nationally and internationally. In 1997, President Clinton presented the US National Medal of Technology to Cerf and Kahn.


4 ARPANET; the research and development process

Here we go briefly through the research and development process, from the point in time when the requirements were formulated and the first research contracts were awarded, till the network was operational and covered the continental USA with two "tentacles" – one to Hawaii and one to Norway, and from there onwards to England.

The planned network consisted of two main components:

• The packet-switches, or nodes (implemented on minicomputers), were to be interconnected in a mesh network by means of high-speed telephone lines and 50 kbit/s modems, to permit alternative routes between any sender-receiver pair. The interface between a node and a host was standardized to enable hosts of different makes and operating systems to connect to the network.

• Each host computer was connected to its dedicated node (communications front-end). The software in the hosts should permit resource sharing and support person-to-person communications.

To implement the plan, ARPA awarded contracts to the following institutions in the last quarter of 1968:

• Bolt, Beranek and Newman (BBN) in Boston; Frank Heart led the group with responsibility to develop the packet-switching nodes (called Interface Message Processors, or IMPs), deploy them, and to monitor and maintain the network;

• University of California Los Angeles (UCLA); Professor Len Kleinrock led the group with responsibility for performance studies of the network by means of simulation and real measurements;

• Network Analysis Corporation (NAC); Howard Frank and his team with responsibility for developing the network topology subject to cost and reliability constraints, and for analyzing the network economics;

• Stanford Research Institute (SRI); to establish a Network Information Center (NIC) as part of Doug Engelbart's group.

The initial four IMPs were fielded at the end of 1969. The first one was installed at UCLA in September. Due to Len Kleinrock's early theoretical work on packet switching and his focus on network analysis, design and measurements, his Network Measurement Center (NMC) was selected to be the first host on the network. The next node was installed at SRI in October. Doug Engelbart led a project called "Augmentation of the Human Intellect" at SRI, which included the early hypertext system NLS. NLS, the oN-Line System, was the first advanced collaborative text system [11], and included many modern text-editing functions like the mouse, cut-and-paste, hypertext, and a window-based user interface. In fact, many of the visionary concepts demonstrated by Engelbart in 1968 only became practical many years later, when personal workstations interconnected in networks became economically feasible. Doug Engelbart also had a journaling system under development, intended to be the basis for the Network Information Center (NIC). Dick Watson was the leader of this task initially.

Soon after SRI was on the network, the first host-to-host message was sent from Kleinrock's group at UCLA to SRI.

Subsequently, in 1972, NIC was established as a separate project at SRI, with Elizabeth (Jake) Feinler as leader. NIC was to maintain the hostname-to-address mapping tables (in use since 1970) and be a repository for network-related reports and documentation (Requests for Comments, etc.).
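Before the Domain Name System replaced this scheme in the 1980s, such a mapping was simply a flat table of address/name pairs distributed to every host. A minimal sketch of that kind of lookup (the two-column file format and the host names below are illustrative, not the NIC's actual format):

```python
# Sketch of a flat hostname-to-address table of the kind the NIC maintained
# before the DNS existed. The file format here is illustrative only:
# "address hostname" per line, with '#' starting a comment.

def parse_host_table(text):
    """Parse 'address hostname' lines into a case-insensitive dict."""
    table = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments and blanks
        if not line:
            continue
        address, name = line.split()[:2]
        table[name.upper()] = address          # hostnames were case-insensitive
    return table

def resolve(table, name):
    """Return the address for a hostname, or None if unknown."""
    return table.get(name.upper())

# Hypothetical example entries:
HOSTS = """
10.0.0.1  UCLA-NMC   # first ARPANET host
10.0.0.2  SRI-ARC
"""

table = parse_host_table(HOSTS)
print(resolve(table, "sri-arc"))   # -> 10.0.0.2
```

Every host held its own copy of the table, which is exactly why the scheme stopped scaling as the network grew and a distributed naming service became necessary.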

Two other IMPs were then installed, one at UC Santa Barbara and one at the University of Utah. These two groups did research in visualization: visualization of mathematical functions at Santa Barbara and 3-D visualization at Utah.

A key feature of the network was the use of dedicated computers, called packet switches, interconnected in a mesh network and responsible for the transport of data in the form of packets. The network should be robust against link and/or node failures. Hence the mesh network should provide alternative routes when forwarding packets, to circumvent failures in the network.

Another key feature of ARPANET was the use of a network control center – NCC. NCC had the ability to monitor each node in the network, start and stop node interfaces, start and stop nodes, perform diagnostic tests of individual nodes and lines, and download new software into the nodes. This made ARPANET a very powerful laboratory for studying and developing networking technology, all without requiring personnel to travel to the widely separated sites. It should be noted that NCC served two purposes: to be an efficient tool in the development process, and to manage and maintain the network. It was always an important goal in the development process to arrive at a technology that would not need centralized control.

While the performance of the network was analyzed and measured, the Network Working Group led by Steve Crocker at UCLA worked intensely to finish designing the host-to-host protocol called "Network Control Program" (NCP) [12]. NCP was part of the node software and provided a standardized interface to the attached hosts. The design was completed in December 1970, and was then implemented and installed in the increasing number of nodes during the 1971/72 period. This enabled the development of the long-awaited user services (host applications) like "Telnet", file transfer (FTP), and electronic mail. Ray Tomlinson at BBN implemented the initial electronic mail system on the ARPANET, using the now well-known address notation user@host [13]. An improved mail management program, for reading, storing, forwarding and sending mail, was written using TECO macros by Larry Roberts in 1972.
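Tomlinson's notation survives unchanged today: the part after the "@" names the destination host, the part before it the mailbox on that host. A tiny sketch of that split (the example address is hypothetical):

```python
# Sketch of Tomlinson's user@host convention: everything after the last "@"
# names the destination host, everything before it the mailbox on that host.

def parse_address(addr):
    user, _, host = addr.rpartition("@")
    if not user or not host:
        raise ValueError(f"not a user@host address: {addr!r}")
    return user, host

print(parse_address("tomlinson@bbn-tenexa"))   # -> ('tomlinson', 'bbn-tenexa')
```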

After the initial test period, the ARPANET started to grow [14], see Figure 1. It spanned across to the East Coast in December 1970, with a total of 13 packet switches and numerous attached hosts. By 1975 it consisted of about 50 IMPs and between 150 and 200 hosts, permitting up to four hosts per IMP. The network interconnected defense establishments, research institutions and universities with defense contracts, scattered all over the USA. The continental part of the network had two "tentacles", one to Hawaii and one to Kjeller/Norway and onwards to London/England. As can be seen from Figure 1, all IMPs, excluding those in Hawaii, Norway and London, were interconnected with at least two neighbor IMPs for reliability purposes. In 1975, the operation of the continental part of ARPANET was transferred by ARPA to the Defense Communications Agency, while the responsibility for policy, further research and the international extensions remained with ARPA.

5 Internetting was part of the vision from the very beginning

In parallel with the ARPANET development in the 70s, ARPA also initiated and funded development of mobile communications for tactical purposes, based on portable radio units and packet-switching technologies, and of packet-switched satellite communications for wide area coverage.

Before he left the ARPA office, Larry Roberts had initiated the development of a packet-switched satellite network (SATNET) by funding BBN to build the ground station nodes, the satellite IMPs. When Bob Kahn moved to ARPA in late 1972, he initiated the development of a communications network based on mobile radio units, called the Packet Radio Network (PRNET). Later, when Larry Roberts left ARPA, Bob Kahn took over the management of the ARPANET project. He also significantly changed the nature of the packet satellite effort – the original satellite node was split into three units: the IMP, the SIMP, and a gateway (now router) in between. Although this was relatively straightforward technically, it was a non-trivial accomplishment politically, and it required the internet architecture to guide it.

The intention was to interconnect PRNET and SATNET with ARPANET. The ARPANET was viewed as a terrestrial backbone network. The PRNET was a broadcast radio network interconnecting geographically distributed clusters of packet-radio units, while SATNET was believed to be a means to interconnect widely dispersed ARPANET-like networks, for example located on different continents.

Figure 1 Some of the early stages of ARPANET (December 1969; December 1970; September 1971; August 1972; September 1974)

The PRNET development was mainly done as a collaborative effort among many parties, led by Bob Kahn. The packet radios were built by Collins Radio in the Dallas area, the packet radio stations were built by BBN, and the whole system was assembled and tested in the San Francisco Bay Area, with SRI International as the leading party. The first field tests started in 1975 [15]. A forerunner to the PRNET project was the radio-based Aloha system, conceived by Professor Norman Abramson and developed at the University of Hawaii by Abramson, Frank Kuo and Richard Binder [16], with funding from ARPA. The purpose was to provide access for user terminals, initially within range of the university, but later scattered over the Hawaiian archipelago, to a central computing facility at the University of Hawaii. It made use of a common inwards radio channel, shared on a packet basis between users in a random access manner. Another common radio channel was used in the outwards direction to broadcast the responses from the central computer to the various user terminals. The system became operational in 1970. The SATNET project was delayed till 1975/76 due to tariff and regulatory problems with INTELSAT that were later solved by Bob Kahn [17]. The project had eight collaborating partners, including Norway and England, and made use of one 64 kbit/s channel in the Intelsat IV satellite system, shared between three ground stations – one in the USA, one in England, and one in Norway (actually located at Tanum in Sweden; Sweden took no part in the collaboration).
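The random-access scheme of the Aloha inwards channel became known as pure Aloha: each terminal simply transmits when it has a packet, and a transmission succeeds only if no other transmission overlaps it. The classic analysis gives a throughput of S = G·e^(-2G) for offered load G, peaking at about 18.4 % of the channel capacity. A small Monte Carlo sketch of that result (the frame counts and seed are illustrative):

```python
import random

def pure_aloha_throughput(g, frames=200_000, seed=1):
    """Monte Carlo estimate of pure-Aloha throughput.

    g: offered load in frames per frame-time. Each fixed-length frame starts
    at a random time; it succeeds only if no other frame starts within one
    frame-time before or after it (the "vulnerable period" of two frame-times).
    """
    rng = random.Random(seed)
    span = frames / g                     # total simulated time, in frame-times
    starts = sorted(rng.uniform(0, span) for _ in range(frames))
    ok = 0
    for i, t in enumerate(starts):
        # Checking only the nearest neighbors suffices, since starts is sorted.
        if (i == 0 or t - starts[i - 1] >= 1.0) and \
           (i == len(starts) - 1 or starts[i + 1] - t >= 1.0):
            ok += 1
    return ok / span                      # successful frames per frame-time

# Theory predicts S = G * exp(-2G), with a maximum of about 0.184 at G = 0.5.
print(round(pure_aloha_throughput(0.5), 3))
```

Pushing the offered load beyond G = 0.5 only increases collisions and lowers throughput, which is why later schemes (slotted Aloha, carrier sensing) were developed.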

6 How did Norway get involved?

There were several contributing factors leading up to inviting Norway to participate in the collaborative effort. The main initiating factor was probably Larry Roberts' idea to connect ARPANET with the network at NPL in England. We provide a brief description of these factors, subjectively presented in order of importance, as we saw it. Thereafter we mention the key persons and the work involved, in the USA (ARPA), England, and in Norway, in getting the collaboration established and operational. In addition, there was the NORSAR project, which made working with Norway desirable, since it already had a 2.4 kbit/s line to the US that could be upgraded.

In late 1970 ARPANET covered a main part of the continental US with about 13 nodes. ARPA now showed interest in linking ARPANET with the network at NPL. Larry Roberts made a proposal to Donald Davies regarding the linking [18]. Roberts suggested that the UK's share in the collaboration should be to provide a line from NPL to NORSAR at Kjeller. This was impossible for NPL to handle. England had just applied for membership in the EEC, and NPL, as a governmental institution, had to turn its focus to European research issues. The result was that Donald Davies had to turn down the proposal from Larry Roberts. It is also worth mentioning that a memo from Vint Cerf in 1973 discussed a plan to link up with CYCLADES in France, but it was never realized.

Professor Peter Kirstein of University College London (UCL) had expressed interest in joining ARPANET. Since Donald Davies was unable to accept Larry Roberts' proposal, Kirstein came up with a research plan that included the attachment of a large mainframe computer to ARPANET, monitoring and measuring the academic traffic over the link to the USA via Tanum, and participation in the planned satellite project [18]. The network topology now required a line from London to NORSAR, and Roberts suggested that the UK's share in the collaboration should be to provide a line from UCL to Kjeller. Donald Davies supported this plan. ARPA accepted it, and was prepared to install a TIP at UCL. Peter Kirstein was able to persuade British Telecom (BT) to offer free of charge a 9.6 kbit/s line to Kjeller, initially for one year. This was sufficient for Peter Kirstein to tell Roberts to proceed with the plan, and in September 1973 the UCL-TIP became operational on the ARPANET. In 1974, the British Ministry of Defence (MoD) took over the cost of the line to NORSAR; somewhat later BT also offered free of charge a 48 kbit/s line from UCL to the British ground station at Goonhilly for the packet satellite connection, plus the British part of the uplink to the satellite. Bob Kahn worked on the procurement of the packet satellite connection and made all the arrangements for the UK participation with John Taylor, then of the British MoD.

In 1965 contacts were established between ARPA's Nuclear Monitoring Research Office (NMRO) and the Norwegian Defence Research Establishment (NDRE). A seismic detection facility was under installation at Billings in Montana, USA, and later similar installations were made at other places (Iran, Alaska and Korea). The close proximity to the USSR made Norway an attractive location for a seismic detection facility, in connection with the Comprehensive Nuclear-Test-Ban Treaty. Director Finn Lied, research supervisor Karl Holberg, and research scientist Yngvar Lundh participated on behalf of NDRE. The result was the establishment of NORSAR (the NORwegian Seismic ARray), which became operational in 1970.

NORSAR was funded by the Research Council of Norway (NTNF at that time), with additional financial support from ARPA. The main processing center, the Seismic Data Analysis Center (SDAC), was located in Virginia. There were leased lines from SDAC to all detection facilities. The line from NORSAR to SDAC went originally via the British satellite ground station at Goonhilly and an underwater cable to Norway. When the Nordic satellite station at Tanum became operational in 1971, the line from SDAC was relocated to go via Tanum. The line was paid for by ARPA. Until 1973 the line had a capacity of only 2.4 kbit/s, but it was upgraded to 9.6 kbit/s thereafter.

So the two main reasons for inviting NDRE and the Norwegian Telecommunications Administration (NTA) to participate in the further development of packet switching were:

• The development of packet-switched satellite communications would profit from Norwegian participation. The NORSAR array was seen as a major potential user of the network. It was also assumed that this work would be of interest to Norway as a large shipping nation.

• The collaboration between ARPA, UCL and NDRE would make it substantially cheaper to link up both UCL and NDRE to ARPANET, by making use of the NORSAR – SDAC line.

Some other minor arguments may also have contributed to inviting NDRE to join:

• Previous contacts and the establishment of NORSAR in 1970 had contributed to a good relationship between the ARPA office and NDRE.

• Yngvar Lundh of NDRE had been on a sabbatical at MIT in 1958, in the same laboratory and at the same time as Larry Roberts completed his PhD. They got to know each other. Years later Larry Roberts started to work for ARPA.

Larry Roberts, at that time director of the IPTO office at ARPA, and Bob Kahn visited NDRE in the early fall of 1972 to discuss a possible participation by NDRE in the further development of the packet switching technology. Many aspects of this new form of communication were discussed at the meeting, in relation to a possible future Norwegian participation. Among other things, Roberts and Kahn pointed out wireless communications, and more specifically satellite communications, as important for Norway as a large shipping nation. They recommended that NDRE should attend the upcoming ICCC meeting later in 1972, where a presentation and demonstration of ARPANET were to take place. Prior to the Norway meeting, ARPA had contacted NTA – the Norwegian Telecommunications Administration (now Telenor) – but they declined to participate.

Yngvar Lundh attended the ICCC meeting in Washington DC [19] and was convinced of the great potential in the applications of this technology. He decided to join the development and participate in the planned multiple-access packet-switched satellite project. He had moral support from director Finn Lied and research supervisor Karl Holberg. Lundh established a small research group at NDRE, consisting of himself and a few master students. He started to participate in the regular ARPA project meetings. On 15 June 1973 a terminal-IMP (TIP), on loan from ARPA, was installed on the premises of NORSAR. This location was selected for two reasons. Firstly, it was important to have the TIP outside the restricted area of NDRE, to permit other Norwegian groups to participate in the project. Secondly, it gave easy access to the NORSAR-SDAC line for multiplexing the TIP and NORSAR traffic over that line. The line capacity was upgraded from 2.4 kbit/s to 9.6 kbit/s.

Paal Spilling started to work for NDRE in 1972 as a research scientist. A former nuclear physicist, he joined Lundh's efforts full time in 1975. He had no knowledge of data communications and protocols and was given time to educate himself – partly by following university courses and partly by practical trial and error. Paal Spilling became a much-needed addition of qualified manpower to Lundh's group. One of his first assignments was to work in Kirstein's group at UCL for two months, to get a flying start. UCL was at that time ready to start testing its implementation of the early version of TCP. Since then, Paal has been a major contributor both to the development of the Internet itself and to other aspects of computer communications in Norway.

Bob Kahn, while at ARPA’s IPTO office, was eagerto get the satellite project started. The idea was to usea fixed 64 kbit/s SPADE channel in the INTELSATIV satellite, with one ground station at Tanum inSweden, one at Goonhilly in England, and one atETAM in West Virginia in USA. The SPADE chan-nel would be used in a time-shared modus betweenthe three ground stations in a modus called “Multi-destination half Duplex”. It was assumed that eachground station operator would pay for its part of theuplink to the satellite. This was a modus operandi theINTELSAT organization could not handle at thattime. As other telecom operators, they were used tothe mode “Single Carrier per Channel”, which meantthat the two end-points of a channel or line had to beowned by the same customer/operator. It took BobKahn between one and two years to convince INTEL-SAT to change their policy, to permit the new way ofoperating the channel and the ground stations; thisincluded also a new tariff for this operational modus

ISSN 0085-7130 © Telenor ASA 2004

Page 123: 100th Anniversary Issue: Perspectives in telecommunications

121Telektronikk 3.2004

[17]. The satellite project – SATNET – therefore wasdelayed until 1975/76.

In parallel with Bob Kahn’s effort to convinceINTELSAT, Yngvar Lundh had discussions with theResearch Department of NTA (NTA-R&D – nowTelenor R&D) about possible participation. Finallyhe was able to persuade them to participate in theplanned satellite project, but only as observer. BobKahn had argued strongly that the line from Kjellerto Tanum should have a capacity of at least 50 kbit/s,since it would transport traffic between USA andLondon, NORSAR and NDRE. He was afraid that alow capacity of that line might constrain the perfor-mance too much and thus bring the packet switchingtechnology in discredit. NTA-R&D was now willingto provide free of charge two lines from Kjeller toTanum, a 48 kbit/s line and a 9.6 kbit/s line. Thehigher-capacity line should be used for the SATNETexperiments, while the 9.6 kbit/s line would replacethe Norwegian part of the existing line betweenNORSAR-TIP and the SDAC-IMP in Virginia. Theinstallation order went out on December 23, 1976. Inaddition, NTA permitted a satellite-IMP (SIMP) to beinstalled inside the Tanum ground station, and to pro-vide free of charge the 64 kbit/s satellite uplink. Thisagreement was initially for one year, but was laterprolonged till the end of 1980.

In connection with the Norwegian participation there were plans to attach NORSAR's two IBM-360 systems and RBK's (Regneanlegget Kjeller-Blindern) Cyber-74 to the NORSAR-TIP, in addition to NDRE's computer laboratory. NORSAR's two systems went on the air in 1977, about four years after the NORSAR-TIP was installed, while the Cyber system was never attached.

In 1975 Yngvar Lundh started planning the attachment of NDRE's computer laboratory to the NORSAR-TIP. When Paal Spilling was back from the two-month stay at UCL, he started the detailed planning of how to connect NDRE's SM-3 computer to the NORSAR-TIP. Towards the summer of 1976 the connection was working. This effort will be described in Chapter 8.

Some further details of the developments in computers and communications were reported in [20].

7 From ARPANET to INTERNET

The initial internet concepts were published in May 1974 [21] by Vint Cerf and Bob Kahn. Based on the technology behind the three networks ARPANET, PRNET and SATNET, all funded by ARPA, Bob Kahn formed the vision of an open-architecture network model: any network should be able to communicate with any other network, independent of individual hardware and software requirements. It included a new protocol, replacing the NCP used in the ARPANET, and gateways (later to be termed routers) for the interconnection of networks.

The design goals for the interconnection of networks, as specified by Kahn, were:

• Any network should be able to connect to any other network via a gateway;

• There should be no central network administration or control;

• Lost packets should be retransmitted;

• No internal changes in the networks should be needed in order to enable their interconnection.

The original paper described one protocol, called the Transmission Control Program (TCP). It was responsible both for the forwarding and for the end-to-end reliable transport. In the following we describe the main events that converted the ARPANET into part of the INTERNET.

The initial three contracts to develop TCP were awarded by ARPA to Stanford University (SU), where Vinton Cerf was a new assistant professor, to BBN (Ray Tomlinson), and to Peter Kirstein's group at University College London (UCL). As mentioned previously, the Stanford group was responsible for developing the initial specifications for TCP. The early implementations at SU and UCL were field-tested against one another in 1975 to support the work on the specifications. As a result of extensive testing, the TCP specifications went through several iterations. It also turned out that the TCP protocol was not modular enough to support certain protocol requirements, such as those needed for packet voice (real-time requirements). Speech traffic has sufficient redundancy that it is far less serious to lose a speech packet now and then than to retransmit lost packets, as the TCP protocol would, and thereby increase the play-out delay at the receiving side. It was decided in March 1978 [22] to split TCP into two parts. One part (IP – the Internet Protocol) was responsible for the networking aspects such as addressing, routing and forwarding, while the other part (TCP – the Transmission Control Protocol) was responsible for the end-to-end requirements – mainly reliability and flow control. The split permitted the development of a simple transport protocol, the User Datagram Protocol (UDP), interfacing with IP and living side by side with TCP.
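The practical effect of the split is still visible in today's socket API: an application opens a SOCK_STREAM socket for TCP's reliable, ordered byte stream, or a SOCK_DGRAM socket when occasional loss is preferable to retransmission delay, as with packet voice. A minimal sketch using a loopback UDP echo in place of a real network (ports and payload are illustrative):

```python
import socket
import threading

# UDP echo over loopback: the sender fires a datagram and hopes for the
# best; there is no connection setup, no acknowledgement, and no
# retransmission - exactly the trade-off the 1978 split made possible.

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))          # let the OS pick a free port
port = server.getsockname()[1]

def echo_once():
    data, addr = server.recvfrom(1024)
    server.sendto(data, addr)          # echo the datagram back unchanged

t = threading.Thread(target=echo_once)
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"voice sample", ("127.0.0.1", port))  # fire-and-forget
reply, _ = client.recvfrom(1024)
t.join()
server.close()
client.close()
print(reply.decode())                  # prints: voice sample
```

Replacing SOCK_DGRAM with SOCK_STREAM (plus connect/accept) gives the TCP variant, where the kernel handles ordering and retransmission on the application's behalf.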


After the IP-TCP split, other parties started implementing TCP/IP under popular operating systems such as TENEX and TOPS20, and integrating it with communication services like TELNET, FTP and email. BBN's version of TCP/IP was integrated into Berkeley's ARPA-supported work on UNIX. BBN also did the initial work to develop internet gateways.

So early in the 1980s a mature suite of protocols had been developed. Simultaneously, several high-speed local area networks (LANs), graphical workstations and IP routers had been developed within the research community. All the necessary ingredients were now available to make the internet technology proliferate. In addition, the SATNET experiment was completed, and the system was now being used on a semi-operational basis to interconnect ARPANET with LANs at UCL and NDRE. Two new European partners were attached to SATNET. DFVLR (the German Air and Space Research Institute) was attached to SATNET in the summer of 1980, via the satellite ground station in Raisting. A little later the research institute CNUCE in Pisa, Italy, was also attached, via a ground station in Italy.

ARPA now initiated a transition plan to gradually convert the communications software in all important ARPANET hosts over to the internet suite of protocols. The transition was to be completed and in place by January 1, 1983. It actually happened over a period of several months in early 1983. Many of the hosts were then moved off ARPANET and reconnected to LANs, along with many new workstations. ARPANET then served as a backbone, interconnecting the collection of LANs, see Figure 2. This we call the INTERNET, with capital letters. The backbone was still managed by BBN under contract with ARPA, and institutions wanting to connect to the INTERNET needed permission from ARPA. This effectively limited the growth of the INTERNET for some time. In 1983 the internet technology was sufficiently mature, and the whole community had converted to TCP/IP. This enabled the Defense Communications Agency (DCA) to split off all defense-related organizations into MILNET and integrate it with the Defense Data Network.

This was done to support non-classified defense operational requirements. The other part of ARPANET, still called ARPANET, supported the needs of the general research community. This open part was eventually taken over by the National Science Foundation (NSF) (once their NSFNET had been established around 1985) and other funding parties, and regional internets were gradually privatized or new ones established.

A first step in the process was to split ARPANET into approximately two halves that were interconnected by a set of filtering routers, see Figure 2. The

Figure 2 A map of the INTERNET around 1986

ISSN 0085-7130 © Telenor ASA 2004

Page 125: 100th Anniversary Issue: Perspectives in telecommunications

123 Telektronikk 3.2004

filtering routers would prevent unauthorized communications initiated in the open part from penetrating into the closed MILNET. Electronic mail was considered harmless, and was permitted to flow freely both ways between MILNET and ARPANET. This was the first example of what later became known as “firewalls”.
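The mail-only policy amounts to a simple filtering rule on the mail port. A minimal sketch in Python (the packet representation and the use of TCP port 25 for mail are our illustrative assumptions, not the 1983 routers’ actual logic):

```python
# Sketch of a MILNET/ARPANET-style filtering rule: block traffic
# initiated from the open side, but let electronic mail (assumed here
# to be SMTP on TCP port 25) flow freely in both directions.
SMTP_PORT = 25  # illustrative assumption, not the original rule set

def permit(src_net: str, dst_net: str, dst_port: int) -> bool:
    """Return True if a packet may cross the MILNET/ARPANET boundary."""
    if dst_port == SMTP_PORT:
        return True                     # mail is considered harmless
    if src_net == "ARPANET" and dst_net == "MILNET":
        return False                    # the open side must not reach in
    return True                         # traffic from the closed side may go out

print(permit("ARPANET", "MILNET", 25))  # mail crosses the boundary
print(permit("ARPANET", "MILNET", 23))  # a TELNET attempt from the open side does not
```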

NSF agreed to fund part of the infrastructure needed in the academic environment for the interconnection of the supercomputer sites and to provide access from universities to these facilities. This infrastructure consisted of leased lines and routers. It was a more flexible, higher-speed solution than ARPANET, and ARPANET became obsolete after a short while; it was completely phased out in 1990. The Internet’s structure as we know it today dates from then.

This transition took away the strict control performed by ARPA regarding permission to connect to the Internet. So from now on we spell the Internet with small letters. Academic institutions, research organizations and even research departments belonging to industrial organizations, not only in the US but also in Europe, were permitted to connect to the Internet. NSF and European research funding shared the cost of several leased lines between key centers in Europe and the USA. The initial main Internet sites in Europe were the Center for Mathematics and Computer Science (CWI) in Amsterdam and CERN in Geneva, Switzerland. “Internets” grew up in most European countries. In Scandinavia the four Nordic countries Denmark, Finland, Norway and Sweden joined forces and interconnected their growing national academic internets via NORDUnet, with the hub in Stockholm and leased lines from there to CWI in Amsterdam and to CERN. This increased the growth rate, with the result that the whole Internet approximately doubled in size every 12–18 months, and in 5–6 years’ time covered more than 50 countries and more than 20 million users. This development is important, but it is not part of our presentation.

The current Internet is not owned or operated by one organization. The many pieces of it are owned by different organizations and operated in a distributed fashion. It is therefore surprising how well it functions. This must primarily be ascribed to the robustness of its protocols and routing mechanisms.

One of the motivations behind the introduction of packet switching was the need to share expensive and scarce computer resources among a large and geographically distributed set of users. Such resources could for example be editors, program compilers and debuggers, programs for scientific calculations, and various database applications like document archiving and retrieval. Hence two of the early services provided in ARPANET were TELNET and FTP. TELNET provided the means to interface a local terminal with a program/application on a remote host computer, making use of the concept of communicating virtual terminal entities in the local and remote computers, and a negotiation procedure to select the right terminal options. FTP provided the ability to transfer files to and from a remote computer, in either binary or character-oriented mode. In addition to these two services, electronic mail was also provided, and it soon became the dominating service in ARPANET, and later in the Internet. During the 1970s these services were steadily refined as experience was gained in using them. The services have remained essentially the same since they were standardized in 1982.
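The option negotiation TELNET uses can be sketched with a bare virtual terminal that simply refuses every option the peer proposes (the command codes are TELNET’s standard ones; the refuse-all policy is our simplification, not a historical implementation):

```python
# Minimal TELNET option negotiation: a bare NVT that declines every
# option offered. IAC ("interpret as command") and the WILL/WONT/DO/DONT
# codes are the standard TELNET command bytes.
IAC, WILL, WONT, DO, DONT = 255, 251, 252, 253, 254

def negotiate(data: bytes) -> bytes:
    """Scan an incoming byte stream; answer DO with WONT and WILL with DONT."""
    reply = bytearray()
    i = 0
    while i < len(data):
        if data[i] == IAC and i + 2 < len(data) and data[i + 1] in (DO, WILL):
            option = data[i + 2]
            answer = WONT if data[i + 1] == DO else DONT
            reply += bytes([IAC, answer, option])
            i += 3
        else:
            i += 1                      # ordinary data byte; pass over it
    return bytes(reply)

# The peer asks us to enable option 1 (echo) and announces option 3:
print(negotiate(bytes([IAC, DO, 1, IAC, WILL, 3])))
```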

As the Internet grew in size and more stored information and documents became available online, one needed help in locating documents – searching for titles and/or keywords. Services such as “archie” and “gopher” were developed, and later the “Wide-Area Information Servers” (WAIS), permitting natural language searches among standardized database servers.

A very important boost to the Internet community was provided by the “World-Wide Web”. Originally it was developed in 1989–91 by Tim Berners-Lee at CERN, the European Laboratory for Particle Physics, and released in 1991. The purpose was to provide the research staff with a universal means to disseminate text, graphical information, figures and database information. It makes use of the concept of hypertext, i.e. non-sequential text. This concept, in the form of the MEMEX machine, was first described by Vannevar Bush in 1945 in the Atlantic Monthly article “As We May Think”, and later refined as a concept by Ted Nelson in the 1960s. It was Ted Nelson who coined the term ‘hypertext’. Another early contributor to the development of hypertext systems was Douglas Engelbart, who had demonstrated working hypertext links in a prototype application (NLS) at Stanford Research Institute in 1968.

“Clicking” on such a link to a reference emphasized in a document automatically opens a new connection to that reference. It may be anywhere in the net, possibly in another computer in a different country. The referenced document is retrieved and presented on the user’s terminal, all in a seamless fashion. The “language” used is the “HyperText Markup Language” – HTML – a subset of the “Standard Generalized Markup Language” – SGML. It is used to tag the various pieces of a document, so as to specify how they should be presented on the screen.
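The essence of the markup is that link targets are ordinary tagged text that a client can pick out of the document. A small sketch using Python’s standard-library parser (the sample page and URL are invented for illustration):

```python
# Following a hypertext link starts with finding it in the HTML markup.
# This sketch pulls the link targets (href attributes of anchor tags)
# out of a document using the stdlib parser.
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":                      # anchor tags carry the hyperlinks
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

page = '<p>See the <a href="http://example.org/memex.html">MEMEX article</a>.</p>'
collector = LinkCollector()
collector.feed(page)
print(collector.links)
```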

Two popular front-end clients – “browsers” – emerged. One, called Mosaic, was distributed free of charge from the National Center for Supercomputing Applications (NCSA) in 1993/94. The other, Netscape, was the commercial version of Mosaic. These Web browsers have turned out to provide the most viable service on the Internet, and really set off an explosion in the traffic volume.

8 The Norwegian contributions to the Internet development

This chapter provides a short summary of the Norwegian contributions to the collaboration with the ARPA community.

The first Norwegian “host” on ARPANET

The part of the computer laboratory at NDRE relevant to our packet switching activities consisted of a computer named SM-3, produced by Kongsberg Våpenfabrikk. It had 64 kB of memory, a punch-card reader, a paper tape reader, and a fast line printer. No hard disk, no network interface, and no operating system. Programs were written on IBM punch cards, then assembled and linked, and the debugged, loadable version was punched out on paper tapes to ease later loading of working programs. The assembler, linker and loader also had to be read in from paper tapes.

The first task for Paal Spilling was to make a multi-tasking operating system for the SM-3. Via the collaboration with ARPA, he got hold of a technical report describing the ELF operating system, developed at SRI for the PDP-11/45 [23]. This was a good guideline and a good help in understanding a multi-tasking system. The PDP-11/45 was a much more advanced system than the SM-3; hence only the rudiments of the ELF functional constructs were applicable. After a substantial period of trial and error, Spilling finally had a robust and reliable multi-tasking system, with process scheduling, process-to-process communication, buffer management, and interrupt handling.

The work to get the SM-3 computer connected to the ARPANET node at NORSAR started around the summer of 1975, but was interrupted for two months by Spilling’s visit to UCL. The physical distance between the computer laboratory at NDRE and NORSAR-TIP required us to make use of the “Very Distant Host” interface (VDH interface) on NORSAR-TIP [24]. An SM-3 VDH interface was built, and Paal Spilling had to design and program the corresponding driver. The driver was integrated with the newly developed multi-tasking system. After an intensive debugging phase, the interface was operating correctly and reliably. As a final test of the VDH interface and the multi-tasking system, Spilling performed a set of round-trip time measurements between the SM-3 and a number of nodes in ARPANET. The results were consistent with similar measurements performed elsewhere. Spilling also measured some specific features of the NCP protocol. NCP was running in all ARPANET nodes at that time, and was responsible for fragmentation of messages into packets at the entry node and reassembly of the packets into the original message at the destination node, before presentation of the message to the attached host.
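The fragmentation and reassembly job described here can be sketched as follows (the packet size and the simple (sequence number, chunk) representation are illustrative, not NCP’s actual formats):

```python
# Sketch of the entry-node/exit-node job the text describes for NCP:
# split a message into fixed-size packets carrying sequence numbers,
# then reassemble them (possibly arriving out of order) at the
# destination before handing the message to the attached host.
def fragment(message: bytes, size: int):
    """Split a message into (sequence number, chunk) packets."""
    return [(i, message[i * size:(i + 1) * size])
            for i in range((len(message) + size - 1) // size)]

def reassemble(packets):
    """Sort by sequence number and concatenate back into the message."""
    return b"".join(chunk for _, chunk in sorted(packets))

msg = b"round-trip time measurement"
pkts = fragment(msg, 8)
pkts.reverse()                 # simulate out-of-order arrival
print(reassemble(pkts) == msg)
```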

TCP implementation and testing

As mentioned previously, Paal Spilling stayed at University College London (UCL) in September and October of 1975. There he had the pleasure of participating in the first transatlantic tests of TCP. The tests were conducted between two independent implementations, one done at Stanford University (Professor Cerf’s group) and one at UCL (Professor Kirstein’s group). It was very exciting to observe that these two implementations were able to establish connections and exchange data, after some hectic debugging. Later, to demonstrate the robustness of the TCP protocol, the line between LONDON-TIP and NORSAR-TIP was taken down for 10–15 minutes in the midst of a data transfer between UCL and Stanford. When the line was brought up again, the two ends of the TCP connection happily continued the transfer from where they had stopped, without losing data.

Back at NDRE, Spilling and a colleague started implementing the early version of TCP [21, 25] on the SM-3 computer. After about half a year of work, it was decided to stop the implementation and move over to a more modern system – the Norwegian NORD-10 computer. Looking back, this was probably not a good decision; it would have been better to complete our implementation, to get the satisfaction and experience of fulfilling the task. Starting to work with the NORD-10 computer turned out to be more difficult than expected. We had to get acquainted with a new operating system, design and build a new VDH interface, and implement the driver under the new operating system (SINTRAN). This was not a trivial task. The design and implementation of TCP in the SINTRAN operating system then turned out to be difficult too. SINTRAN had a very primitive process-to-process communication (signalling) system: if a process received two signals, one after the other, the first one was overwritten and lost. Hence this was useless, and we had to invent some hacks to circumvent the problem. It was also next to impossible to convince the software group at Norsk Data, responsible for the SINTRAN operating system, that this was an important deficiency of their operating system. It took a few years before that was appreciated and corrected. But by then it was too late for us. In addition, SINTRAN was a complicated system, which made the implementation of TCP cumbersome. After some time the work unfortunately had to be stopped.
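The deficiency, and the kind of workaround it forces, can be illustrated by contrasting a one-slot signal mailbox with a queued one (a sketch of the general idea, not the actual SINTRAN hack):

```python
# The SINTRAN deficiency described above: a one-slot signal mailbox
# where a second signal overwrites an unread first one. The remedy is
# to queue signals so that none are lost. (Illustrative classes; not
# the historical workaround code.)
from collections import deque

class SlotMailbox:
    """Single-slot signalling: a new signal overwrites an unread one."""
    def __init__(self):
        self.slot = None
    def send(self, sig):
        self.slot = sig                 # clobbers any pending signal
    def receive(self):
        sig, self.slot = self.slot, None
        return sig

class QueueMailbox:
    """Queued signalling: every signal is delivered, in order."""
    def __init__(self):
        self.q = deque()
    def send(self, sig):
        self.q.append(sig)
    def receive(self):
        return self.q.popleft() if self.q else None

slot, queue = SlotMailbox(), QueueMailbox()
for m in (slot, queue):
    m.send("A"); m.send("B")
print(slot.receive(), queue.receive(), queue.receive())  # slot loses "A"
```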

The SATNET project

In the period 1976 through 1979, NDRE was heavily involved in the development of SATNET. The purpose of the SATNET project was to explore the feasibility of operating a 64 kbit/s SPADE channel in the INTELSAT system, common to a set of ground stations, in a packet-switched mode. As mentioned previously, three ground stations were involved in the project: one at Etam in West Virginia on the US East Coast, one at Goonhilly on the English west coast, and the third at the Nordic satellite ground station at Tanum in Sweden, see Figure 3. To enable the packet-switched operation of the satellite channel, so-called Satellite IMPs (SIMPs) were installed in the ground stations, interfacing with the SPADE channel equipment [26]. Each SIMP was then interconnected, via a leased line, with a gateway computer – the Etam SIMP with a gateway at BBN in Boston, the Goonhilly SIMP with a gateway at UCL, and the Tanum SIMP with a gateway at NDRE. The other interface of each gateway was connected to an ARPANET node. As mentioned previously, the line from NDRE to Tanum and the satellite uplink were kindly offered free of charge by the Norwegian Telecommunications Administration (NTA) for the duration of the project. The capacities of the lines between the SIMPs and the gateways were in the order of 50 kbit/s.

The SATNET research program, organized by Bob Kahn much as Larry Roberts had done for ARPANET, was performed as a joint effort between Linkabit Corporation in San Diego, the University of California at Los Angeles (UCLA), Bolt, Beranek and Newman (BBN) in Boston, the Communications Satellite Corporation (COMSAT) in Gaithersburg, Maryland, University College London (UCL), and NDRE in Norway [20]. Linkabit had the project’s technical leadership. BBN was responsible for the development of the SIMPs, including the various channel-access algorithms the participants wanted to test out. The project participants met about four times a year, with the meeting location circulating among the participating institutions.

The Norwegian contingent was headed by Yngvar Lundh, with Finn-Arve Aagesen and Paal Spilling as work force. Aagesen was responsible for performing simulation studies of the most promising channel-access algorithm, the “Contention-based, Priority-Oriented Demand Access” (CPODA) algorithm. Paal Spilling developed management software on the SM-3 computer, to control artificial traffic generation in the SIMPs and to fetch data collected in the SIMPs during measurements. Since the SM-3 computer did not have any storage medium, the easiest solution was to dump the measurement data out on the fast line printer; the analysis of the measurements then had to be performed by hand. Several access algorithms were studied experimentally, among others TDMA, Reservation-TDMA, and CPODA [27, 28]. Measurements and simulations were also performed by Kleinrock’s Network Measurements Group at UCLA, where Mario Gerla was a key person.
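Of the schemes named above, fixed TDMA is simple enough to sketch: each ground station owns every n-th time slot in strict rotation (the slot numbering is illustrative; CPODA’s demand-assignment logic is far more involved and is not reproduced here):

```python
# Sketch of the simplest channel-access scheme studied on SATNET,
# fixed TDMA: the shared satellite channel is divided into time slots
# assigned to the ground stations in strict rotation. Reservation
# schemes such as CPODA instead hand out slots on demand.
STATIONS = ["Etam", "Goonhilly", "Tanum"]   # the three SATNET ground stations

def slot_owner(slot: int) -> str:
    """Which station may transmit in a given time slot under fixed TDMA."""
    return STATIONS[slot % len(STATIONS)]

print([slot_owner(s) for s in range(5)])
```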

Packet speech experiments and demonstrations

NDRE participated in packet-speech experiments performed in 1978–1979 in collaboration with, among others, MIT Lincoln Laboratories just outside Boston. The packet-speech activity was part of the SATNET project. Lincoln Lab had developed a speech vocoder (voice coder and decoder), under contract with ARPA, providing a stream of 2.4 kbit/s digitized speech. The vocoder was interfaced with the PDP-11/45 at NDRE, with a similar arrangement at MIT Lincoln Lab, see Figure 3, and later also at UCL. The PDP-11/45s then acted not as gateways, but performed speech-packet assembly at the sending side and packet disassembly and play-out via the vocoder at the receiving side. In addition, the PDP-11/45 contained a conference control program, developed by Lincoln Lab, that handed over the “floor”, in FIFO-queue manner, to the parties indicating their wish to talk.

Figure 3 Network configuration for the SATNET and packet speech experiments (V = Vocoder, S = SIMP, G = Gateway, T = TIP, I = IMP)
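The floor-handover scheme can be sketched as a FIFO queue of parties wishing to talk (class and method names are ours, not Lincoln Lab’s):

```python
# Sketch of the conference floor control described in the text: parties
# who signal a wish to talk are queued, and the "floor" is handed over
# first-in first-out. Illustrative code, not the Lincoln Lab program.
from collections import deque

class FloorControl:
    def __init__(self):
        self.waiting = deque()
        self.holder = None

    def request(self, party):
        """A party indicates its wish to talk."""
        self.waiting.append(party)
        if self.holder is None:
            self._hand_over()

    def release(self):
        """The current speaker finishes; pass the floor on."""
        self.holder = None
        self._hand_over()

    def _hand_over(self):
        if self.waiting:
            self.holder = self.waiting.popleft()

conf = FloorControl()
conf.request("NDRE"); conf.request("Lincoln"); conf.request("UCL")
order = [conf.holder]
for _ in range(2):
    conf.release()
    order.append(conf.holder)
print(order)   # parties get the floor in the order they asked for it
```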

Paal Spilling performed a set of measurements to examine the profile of packet-speech traffic [29]. The programming of the PDP-11/45, performed in connection with the experiments, is a good example of resource sharing, one of the driving forces behind the early development of packet switching. The computer was located close to Spilling’s office. The programming tools and the source code for the conference control program were located on a TOPS-20 machine at the Information Sciences Institute (ISI) in Los Angeles. Using a terminal in his office connected to NORSAR-TIP (see Figure 3), Spilling could log on to the computer in Los Angeles, write the necessary modifications to the control program, and have it compiled and loaded across the network into the PDP-11/45 next door. The downloading was facilitated by a cross-network debugger (X-NET), also on the TOPS-20 machine, which enabled Spilling to debug the modified control program loaded into the PDP-11/45. This was an exciting and fascinating experience.

NDRE participated in several packet-speech demonstrations. At one of the regular project meetings, held at UCL, Yngvar Lundh made use of the conference facility and could participate in the meeting from Norway simultaneously with someone at Lincoln Lab, i.e. a three-way Internet speech conference. The quality of the speech when compressed to 2.4 kbit/s was noticeably impaired, but packet transmission through this early Internet connection worked fine in real time.

A large packet-speech demonstration in 1983 is worth mentioning. Paal Spilling had then moved over to NTA-R&D. Speech traffic was exchanged between an aircraft carrier in the Pacific Ocean off the Californian coast and NTA-R&D at Kjeller. The communications went via a PR network, involving the aircraft carrier, an airplane flying in the vicinity of the ship, and a PR node at SRI attached to the INTERNET, then across the INTERNET to the East Coast, then across SATNET via the Tanum ground station to NTA-R&D. The purpose of the demonstration was to show high-ranking military personnel onboard the aircraft carrier the feasibility of communicating through an interconnection of different networks (subnets), the plethora of sophisticated techniques involved, and the usability of the Internet for communicating data and digitized speech.

In 1979/80 Paal Spilling had leave of absence from NDRE and stayed with SRI International in Menlo Park, California, working on the ARPA-funded Packet Radio Network (PRNET). There he made a proposal to improve the software architecture in the PR nodes [30] to achieve a better logical layering of the program structure, performed experiments on packet-speech performance with QoS control [31], and suggested a “busy tone” method to overcome the “hidden terminal” problem in wireless communications [32].

Spilling was back at NDRE in the last quarter of 1980. SATNET was now considered operational, and was used to interconnect local networks at UCL and NDRE with ARPANET. UCL was also using it for the total academic service traffic between the UK SRCnet and ARPANET. Spilling made a proposal to the Research Council of Norway and obtained funding for purchasing an LSI-11/23 computer. Through his ARPA connections he got all the necessary software from Dave Mills [33] at the University of Delaware. This software package had the nickname “Fuzzball”, and contained the TCP/IP suite of protocols. The LSI-11/23 was interfaced with a Proteon token-ring network, together with the PDP-11/45. Hence this computer was the first real Norwegian Internet host. Spilling had obtained a class B network address (128.39.0.0) for this network from the Network Information Center (NIC).
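A “class B” address refers to the old classful IPv4 scheme, where the first octet determines the network/host split. A short sketch (the thresholds are the standard classful ranges):

```python
# Classful IPv4 addressing: the leading bits of the first octet decide
# the class and thus how much of the address is network vs host.
# 128.39.0.0 falls in the class B range (first octet 128-191), giving a
# 16-bit network part with room for 2**16 - 2 host addresses.
def address_class(addr: str) -> str:
    first = int(addr.split(".")[0])
    if first < 128:
        return "A"      # 8-bit network part
    if first < 192:
        return "B"      # 16-bit network part
    if first < 224:
        return "C"      # 24-bit network part
    return "D/E"        # multicast / reserved

print(address_class("128.39.0.0"))
```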

Spilling and Lundh had no luck in convincing the management of NDRE to continue the packet-switching effort. Apparently the internet technology, including packet radio, was premature both for the management of NDRE and for the Norwegian Defense.

9 The first Norwegian Internet

Paal Spilling left NDRE in the summer of 1982 to start working for the Research Department of NTA. Being inside NTA enabled Spilling to create the first Norwegian internet and make it part of the Internet.

NDRE showed no interest in exploiting the knowledge and experience obtained in the collaboration with the ARPA community. Paal Spilling therefore attempted to create interest among the research people at NTA-R&D, in spite of Yngvar Lundh’s previous attempts. As a result, Spilling was invited to move over to the R&D department by one of its research supervisors. He did so at the end of the summer of 1982, with the hope of strengthening the Norwegian effort. Unfortunately this did not happen – with one exception. We will come back to this below.

In agreement with ARPA and NDRE, all internet-related equipment was moved over to NTA-R&D, so that the collaboration could continue from this new location, but now without participation from NDRE. NTA was at that time a state-owned monopoly. Being inside NTA gave Spilling several opportunities. The first action was to purchase a VAX-750. Through his ARPA connections Spilling got permission to acquire the Berkeley version of the UNIX operating system (4.1BSD), although NTA-R&D had to pay a significant fee to AT&T. The VAX was connected to a Proteon ring network, the same one as the PDP-11/45 gateway to SATNET. In getting the UNIX system up and running, Spilling had very good help from Helge Skrivervik, and from Tor Sverre Lande at the Department of Informatics at the University of Oslo.

From 1983/84 there was a growing interest at NTA-R&D in getting access to the Internet services. The paradox was that this interest did not result in any interest in collaborating with the Internet community. There was also a desire to get access to the AT&T version of UNIX; hence a Pyramid machine running both versions of the UNIX operating system was purchased and installed. The ring network was now replaced by an Ethernet. In addition, the PDP-11/45 gateway to SATNET was replaced by a Butterfly machine from BBN (both hardware and software), on loan from ARPA. We now provided terminal access to these two UNIX machines, so that everyone on the research staff could get access to the standard Internet services. We also bought a few SUN and PERQ workstations, but at that time they were too expensive to give to everyone. Within a few years the whole lab was Ethernet-cabled and most people had their own workstations.

There was also a growing interest at the universities in Oslo, Bergen and Trondheim in getting access to Internet services. Being inside NTA-R&D enabled Spilling to set up a 9.6 kbit/s line to the Department of Informatics at the University of Oslo. The line was terminated at the VAX at NTA-R&D and at the VAX at the Department of Informatics, and the SLIP protocol was used to interconnect the two machines. Later, in 1984/85, we installed own-developed gateways (routers), based on equipment from Bridge Communications Inc, where Judy Estrin was technical director – a former student of Vint Cerf at Stanford, with whom Paal had carried out the first TCP tests a decade earlier. Through this connection we were able to get the software development tools used by Bridge Communications. This enabled Kjell Hermansen, the only person besides Spilling at NTA-R&D interested in cooperating with the ARPA community, to develop an IP-router package for the Bridge boxes. When the router was operating reliably, Spilling requested NTA-R&D to set up lines to the universities in Oslo, Bergen and Trondheim – with our home-built routers at each end of the lines. A few years later the Bridge routers were replaced by commercial ones from Cisco. This was the first Norwegian academic internet, see Figure 4. It was managed for some years by Spilling. UNINETT, the academic network operator funded by the Department of Education, took over the network in 1987/88, after substantial pressure from a strong group of academic users requiring Internet access.
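SLIP, the protocol used on the 9.6 kbit/s Oslo line, does nothing more than frame IP datagrams on a serial line: an END byte delimits each packet, and END/ESC bytes occurring inside a packet are escaped. A sketch using the framing constants from the SLIP specification (RFC 1055):

```python
# SLIP framing per RFC 1055: each packet is terminated by END, and the
# END/ESC bytes occurring inside the packet are replaced by two-byte
# escape sequences. The code below is a sketch, not a driver.
END, ESC, ESC_END, ESC_ESC = 0xC0, 0xDB, 0xDC, 0xDD

def slip_encode(packet: bytes) -> bytes:
    out = bytearray([END])               # leading END flushes line noise
    for b in packet:
        if b == END:
            out += bytes([ESC, ESC_END])
        elif b == ESC:
            out += bytes([ESC, ESC_ESC])
        else:
            out.append(b)
    out.append(END)
    return bytes(out)

def slip_decode(frame: bytes) -> bytes:
    out, i = bytearray(), 0
    while i < len(frame):
        b = frame[i]
        if b == ESC:
            out.append(END if frame[i + 1] == ESC_END else ESC)
            i += 2
        elif b != END:                   # END bytes only delimit the frame
            out.append(b)
            i += 1
        else:
            i += 1
    return bytes(out)

pkt = bytes([0x45, END, 0x01, ESC, 0x02])    # toy "datagram" with both specials
print(slip_decode(slip_encode(pkt)) == pkt)
```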

In the beginning there was no Domain Name System (DNS). The mapping between host names and addresses was done via a local host file in each computer. The original host file was maintained by SRI-NIC. Every new host on the Internet had to be reported to the NIC, and at regular intervals (daily or so) one had to pull the host file over from the NIC and install it at the right place in the Unix file system. The development of DNS in Norway started in 1985. Jens Thomassen at the Department of Informatics at the University of Oslo was responsible for maintaining the Norwegian domain (.no) on behalf of NTA-R&D.
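Name resolution in that era was thus a table lookup. A sketch that parses entries in the spirit of such a host file (the entries themselves are invented for illustration):

```python
# Before DNS, resolving a host name meant looking it up in the locally
# installed copy of the NIC-maintained host table. This sketch parses a
# few lines of address/name pairs into a lookup dictionary.
def parse_hosts(text: str) -> dict:
    table = {}
    for line in text.splitlines():
        line = line.split("#")[0].strip()      # drop comments and blanks
        if not line:
            continue
        addr, *names = line.split()            # an address, then its names
        for name in names:
            table[name.lower()] = addr
    return table

hosts = """
# host table, pulled from the NIC at regular intervals (invented entries)
128.39.0.1   NTA-VAX
128.39.0.2   UIO-VAX  IFI
"""
table = parse_hosts(hosts)
print(table["uio-vax"])
```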

It is worth mentioning that there was for some time an ongoing debate in the UNINETT community whether the transport network should be based on the IP protocol, on international standards like X.25, or even on DECNET. This continued for probably a few years, until it was obvious that internet communications was the salient technology to focus on.

Figure 4 The Norwegian Internet around 1986 (gateways interconnecting NTA-R&D and the universities in Oslo, Bergen and Trondheim)


10 Other technology drivers

In addition to the ARPA-sponsored research, other very important research activities gave a significant collective momentum to the development of the Internet. In the following we mention those which we feel are the most important.

Bob Metcalfe outlined the basic idea for Ethernet in his PhD thesis at Harvard. It was an extension of the Aloha concepts developed by Norman Abramson at the University of Hawaii. After obtaining his degree, Metcalfe joined XEROX PARC, the research center of XEROX in Palo Alto, in 1973. He had the vision of “small” powerful personal workstations (Altos) interconnected by high-speed local networks called Ethernets. It is highly likely that this vision was inspired by Doug Engelbart’s work on the NLS system. The initial work on Ethernet was demonstrated in 1973 [34]. A set of such Ethernets would later be interconnected into a company-wide network, spanning XEROX offices scattered all over the continental US. The personal workstations should have the capability of being moved from one network and plugged into another, following their owners on their journeys around XEROX. Therefore each workstation was provided with a unique Ethernet address, which could be used to locate and address its owner. When ARPA initiated the work on TCP/IP, XEROX PARC joined in the experimentation and participated in the development process for some years, until the internet concepts were mature enough for them to specify and implement their own internet protocol suite. In 1977/78 XEROX had implemented a company-wide network interconnecting about 25 Ethernets via a set of gateways and leased lines. Amazingly, XEROX, as a commercial company, did not see any business potential in this technology at that time.

A little later, in 1979, IEEE started the standardization work on local area networks (LANs). XEROX, DEC and Intel (DIX) joined forces and proposed a 10 Mbit/s Ethernet standard. Due to the large number of fielded Ethernet interfaces and the significant weight of the DIX effort, the hardware/physical-layer part of the international standard was made compatible with the DIX proposal [35].

Another very important contribution to the usability and popularity of the TCP/IP suite of protocols was the further development and refinement of the UNIX operating system done initially by Bill Joy at Berkeley, also under contract with ARPA. The hardware platforms used were the VAX-750 and VAX-780. Part of this work included the integration of the TCP/IP suite of protocols, developed by BBN, into the UNIX system [36]. To make the protocol package user-friendly, flexible and efficient, it was found necessary to develop a set of support functions surrounding the protocol suite. It was also relatively easy to make device drivers for a variety of popular network interfaces, such as Ethernet, Proteon ring, ARPANET and X.25.
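The support functions built around the protocol suite at Berkeley survive today as the sockets interface. A sketch of the socket/bind/listen/accept/connect pattern in its modern Python form, as a loopback echo round trip (the helper name is ours):

```python
# The Berkeley support functions around TCP/IP became the sockets API.
# This sketch exercises the classic socket/bind/listen/accept/connect
# pattern with a one-shot echo server on the loopback interface.
import socket
import threading

def echo_roundtrip(payload: bytes) -> bytes:
    """Send payload to a one-shot loopback echo server and return the reply."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
    server.listen(1)

    def serve():
        conn, _ = server.accept()        # wait for the single client
        with conn:
            conn.sendall(conn.recv(1024))  # echo whatever arrives

    threading.Thread(target=serve, daemon=True).start()
    with socket.create_connection(server.getsockname()) as client:
        client.sendall(payload)
        reply = client.recv(1024)
    server.close()
    return reply

print(echo_roundtrip(b"hello").decode())
```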

This work was very successful and has been an example for very many other implementations of the protocol suite in other operating systems. Since only a small part of the UNIX system depends on the computer’s hardware, it was relatively straightforward to port the system to other hardware configurations. The Berkeley version of UNIX was made available to universities free of charge and, for a fee to AT&T and with approval from ARPA, to non-educational research organizations, not only in the US but also in Europe.

The flourishing of TCP/IP implementations at many places in the US and Europe provided a large basis for field tests and experimentation. Hence TCP and IP and the accompanying services like TELNET, FTP and email were steadily refined and improved. It was therefore a mature, efficient and user-friendly set of protocols that ARPA proposed as a standard for the American defense in the middle of 1980, approved as a standard in mid-1982.

In the late 1970s and early 1980s Stanford University had an ARPA-supported project to develop a means of supporting VLSI design. It was to be connected to a nascent Stanford University Network. Here, under the leadership of Forest Baskett and Andreas Bechtolsheim, they developed a Motorola 68000-based CPU card and a frame-buffer-based display system. As a spin-off from this project, a few people left to start a small company called SUN Microsystems, which also involved Bill Joy from Berkeley. They ported an early Berkeley version of UNIX onto this hardware, developed a graphical window user interface to go with it, and offered this as a commercial product under the name SUN-1 in 1982. This was supported by ARPA and managed by Bob Kahn. A little later another group formed a small company called Cisco, offering IP routers based on the same CPU card from Stanford University, also with support from Bob Kahn at ARPA.

11 From Resource Sharing to Information Sharing

One driving force behind the development of packet-switching networks was to share expensive resources like computers, software, and communication facilities as efficiently as possible among a large group of users. With the proliferation of PCs and services like the Web, resource sharing got another meaning – the sharing of information.


At the time when this evolution started, computers were expensive, software programs were expensive, and communication lines were expensive. Much of the initial research focused on bridging the geographical distances between the users at their terminals and the computers with the valuable programs they wanted to use. Hence those expensive resources needed to be shared among as many users as possible, and in such a way that users did not need to physically move to the computers to have their jobs done. When email came into practical use, it became the dominating service – still rather centralized, but accessible from geographically distributed terminals. It is worth mentioning that Yngvar Lundh and Paal Spilling had their mailboxes on a host computer at the Information Sciences Institute (ISI) in Los Angeles, accessed from terminals connected to NORSAR-TIP. Keeping the resources centralized made their administration and maintenance straightforward and easy.

Then came the period of affordable minicomputers, where sets of terminal users were clustered around geographically distributed but networked computers. Computers were still too expensive to be affordable as single-user workstations.

Later, with the proliferation of PCs, each user got enough computing power for the daily work. The resources were now distributed, which resulted in more cumbersome administration and maintenance of these resources. In essence, only the networks were now shared among the users.

When the Web was developed and became commercially available (1991/92), coinciding in time with the lifting of the restriction on commercial usage of the Internet, we saw an explosion in geographical coverage, in the number of users, and in traffic volume. The Internet now gradually turned into an information-sharing network. Information could be stored at any location in the network. With the use of powerful search engines and the Web with its hyperlinks, the Internet acts as a gigantic repository of information, transparently accessed by the user via point-and-click in the Web browser.

12 Epilogue

Why was it so difficult to create interest among Norwegian (and European) computer scientists for this new ARPANET/Internet technology? And did Norway benefit from its participation?

In December of 1973, half a year after NORSAR-TIP was installed, a Norwegian ARPANET committee was established to promote and coordinate possible Norwegian participation in ARPA activities. It consisted of members from the Research Council of Norway, NORSAR, RBK, NTH (later NTNU), and NTA-R&D. A condition for connecting to NORSAR-TIP, or making use of its terminal service, was that this should contribute to the furthering of the technology and be beneficial to both ARPA and NDRE. The committee encountered two main problems. One was the uncertainty about the future of ARPANET. Larry Roberts had left the IPTO office in 1973, and Dr. J.C.R. Licklider took over as director. He needed time to be informed and make a decision regarding the future of ARPANET; hence it took a while before the future was known. In July 1975 the continental part of ARPANET was transferred to the Defense Communications Agency (DCA), while IPTO continued to be responsible for the international engagements. The other problem for the committee was the various upcoming competing European communication initiatives, similar to what Donald Davies experienced in England. The committee did not come up with any constructive proposals, and dissolved itself in 1975.

This is just one example of the difficulty in creating research interest in the internet technology. For many years the Internet and its concepts were neglected by the telecom operators and the European research community. Due to the tremendous popularity, growth and global coverage of the Internet, the traditional telecom operators had more or less unwillingly been forced to accept reality and offer Internet access. We will attempt to shed light on some of the factors contributing to this effect.

Competition between alternatives

Simultaneously with the growth of the ARPANET in the 1970s, we saw the emergence of other, similar competing communication concepts, like CYCLADES [37] in France, presented by Pouzin in 1973, the European Informatics Network (EIN) [38], presented in 1976, and the CCITT’s Orange Books [39] containing the X.25 standards, published in 1977. In 1983 the international standards community presented the “Reference Model for Open Systems” [40] and then, in succession, a set of layered communication protocols. The dominating feature of X.25, and of the ISO standards in general, was the virtual circuit principle, in contrast to the flexible packet and datagram modes of operation in the ARPANET and later the Internet. A virtual circuit in the packet switched arena is the equivalent of an end-to-end circuit established in the telephone network. The dominant part of the Norwegian research community, including NTA-R&D, was for a long time convinced that packet communications had to be based on virtual circuits.
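The practical difference between the two camps can be illustrated with a toy sketch. This is not any historical implementation; the four-node topology and routing tables below are made up purely to show why datagrams are the more flexible mode: each datagram is routed with the current tables, while a virtual circuit freezes its path at call setup, telephone-style.

```python
# Toy contrast between datagram and virtual-circuit forwarding.
# Topology and table entries are illustrative only.

routing = {          # next hop, keyed by (current node, destination)
    ("A", "D"): "B",
    ("B", "D"): "D",
    ("C", "D"): "D",
}

def datagram_path(src, dst):
    """Each packet is routed hop by hop using the *current* tables."""
    path, node = [src], src
    while node != dst:
        node = routing[(node, dst)]
        path.append(node)
    return path

def setup_virtual_circuit(src, dst):
    """A virtual circuit pins the whole path once, at call setup;
    every later packet follows this fixed path."""
    return datagram_path(src, dst)  # computed once, then frozen

vc = setup_virtual_circuit("A", "D")
print(datagram_path("A", "D"))   # routed per packet: ['A', 'B', 'D']
routing[("A", "D")] = "C"        # a link fails / routing changes
print(datagram_path("A", "D"))   # datagrams adapt: ['A', 'C', 'D']
print(vc)                        # the circuit still says ['A', 'B', 'D']
```

After the routing change, new datagrams simply take the new path, whereas the established virtual circuit would have to be torn down and re-established.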

ISSN 0085-7130 © Telenor ASA 2004

Telektronikk 3.2004

The TCP/IP family of protocols was expected to be replaced by international standards

The management at NDRE and NTA-R&D, and the Norwegian communication research community at large, did not believe in the Internet technology until the end of the 80s and the beginning of the 90s. In general, most communication experts believed that the TCP/IP suite of protocols would eventually be replaced by internationally agreed standards. So when we attempted to create interest in participating in the further development of this technology, the responses were negative.

Skepticism toward defense matters

At the time Norway entered the development activities, there was a general antipathy toward everything connected with defense matters, and especially with US defense. This antipathy was of course not reduced by the fact that participation in the activities and usage of the communications network had to be approved by ARPA.

International standards development

The national authorities and the academic communities believed strongly in international standards. It was relatively easy to obtain funding for participation in standards activities. This was less committing and less demanding than the hard practical work that went on in the ARPA community. Standards were worked out on paper within a study period of four years and, when ready, accepted more or less without practical experience. Later, when the standards were to be implemented and tested, deficiencies were invariably detected; the standards then had to be revised and re-proposed as standards in the next study period. In contrast, the ARPA research went via implementations, testing, modifications, more testing and refinements, until a protocol, when mature enough and sufficiently stable, was finally adopted as a standard. This also included a set of support functions like “name to address mapping”, “service type to service-access-point mapping”, and “IP address to MAC address mapping”, to make the combined protocol layers work efficiently and user friendly.
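The three support functions mentioned above can be sketched as a chain of lookup tables. This is only a conceptual illustration; the host name, addresses, and port numbers are invented, and the real mechanisms (DNS-like name service, well-known ports, ARP-like address resolution) are of course distributed protocols, not in-memory dictionaries.

```python
# Illustrative sketch of the layered "mapping" support functions.
# All table contents are made-up examples.

host_to_ip = {"isi.example": "10.1.0.3"}          # name -> IP address (DNS-like)
service_to_port = {"mail": 25, "telnet": 23}      # service -> access point (port)
ip_to_mac = {"10.1.0.3": "08:00:2b:01:02:03"}     # IP -> MAC address (ARP-like)

def resolve(host, service):
    """Chain the three mappings so a user can say ('isi.example', 'mail')
    instead of remembering numeric identifiers at every layer."""
    ip = host_to_ip[host]
    return ip, service_to_port[service], ip_to_mac[ip]

print(resolve("isi.example", "mail"))
# -> ('10.1.0.3', 25, '08:00:2b:01:02:03')
```

The point of bundling such mappings with the protocols was exactly this: the user deals with names and services, and each layer translates down to the identifier it actually needs.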

When the ISO standards came out in the middle and latter half of the 1980s, after a substantial work effort, a set of standards had been defined for each layer in the reference model. These standards included many options. Before the standards could be implemented, one had to make use of workshops to prepare and agree on the options to use in practice.

This took quite a while. It is worth mentioning that the options agreed upon made the ISO standards, for all practical purposes, functionally equal to the Internet protocols.

Agreed international standards were not openly available; they had to be purchased for a certain fee. In contrast, all Internet protocol specifications and related documentation were freely available.

The American dominance

A very important contribution to the usability and popularity of the TCP/IP suite of protocols was the refinement of the UNIX operating system done at Berkeley. This work included the integration of the TCP/IP suite of protocols into the UNIX system. The work was very successful and has served as a model for many other implementations of the protocol suite under other operating systems. The Berkeley version of UNIX was made available free of charge to universities and, for a fee to AT&T and with approval from ARPA, to non-educational research organizations, not only in the US but also in Europe. Unix was for years the dominating system used in higher education and in research. Hence a whole new generation of highly educated people were exposed to and influenced by Unix and internet technology and services.

The development of computers and their software was dominated by American companies. The TCP/IP suite of protocols was part of the software systems delivered with the computers. The software companies had no incentive to spend money on implementing other standards, unless someone paid for it. So this was also a major factor in gradually making the TCP/IP suite of protocols a de facto communications standard.

Did Norway benefit from the Internet cooperation with the ARPA community?

The opportunities provided to Spilling when he moved over to NTA-R&D in mid 1982 enabled him (1984/85) to establish a small Norwegian Internet, interconnecting the informatics departments at the universities in Oslo, Bergen and Trondheim, and NTA-R&D. The network at NTA-R&D was interconnected, via the Nordic satellite ground station at Tanum and SATNET, with ARPANET in the US. This implied that influential academic research people could make use of Internet services and communicate with colleagues in the US, and they experienced that this form of communications worked well and provided a set of reliable, effective, and attractive services.

A few years later (1987/88) the academic communications network UNINETT was established, interconnecting, in the first instance, the informatics departments at the main Norwegian universities. As mentioned previously, there was a debate in the UNINETT community regarding transport network technology – X.25, DECNET, and IP. But gradually the experience with, and the increasing desire to use, Internet services paved the way for UNINETT to provide this kind of communications – in the beginning as some sort of hybrid network, but later converted to a pure IP-based network when the situation had ripened sufficiently. UNINETT gradually evolved to encompass all higher educational institutions in Norway.

The knowledge and experience gained by participating in the ARPA projects led to the establishment of a computer communications research group and an early curriculum in computer communications at the Department of Informatics at the University of Oslo. This effort was initiated by Yngvar Lundh and Paal Spilling, and gradually led to the establishment of similar activities at all universities in Norway.

In the late 80s, all Nordic countries had installed academic Internets. The operating organizations combined forces and created NORDUnet. It interconnected the academic networks in Norway, Sweden, Finland, and Denmark, with the main hub in Stockholm. From there, leased lines were installed to CERN and to CWI in the Netherlands, and later directly to the US.

The Nordic countries have the highest percentage of Internet users in the world. This is in part due to the early exposure to this technology, first in the academic world and later in the public sector. NTA (Telenor) was, around 1994/95, among the first telecom operators in Europe to be convinced to offer Internet access to customers. This is certainly due to the close exposure to this technology over a long period of time at NTA-R&D.

Acknowledgements

The authors are very grateful to Bob Kahn and Peter Kirstein for providing valuable comments on various parts of this article.

References

1 Kleinrock, L. Information Flow in Large Communication Nets. RLE Quarterly Report, July 1961.

2 Kleinrock, L. Communication Nets. NJ, USA, McGraw-Hill, 1964.

3 Baran, P. On Distributed Communication Networks. IEEE Trans. on Communication Systems, CS-12, March 1964.

4 Baran, P et al. On Distributed Communication Networks. RAND Corp., 1960–1962. (Series of 11 internal reports.) URL: http://www.rand.org/MR/baran.list.html

5 Davies, D W et al. A digital communication network for computers giving rapid response at remote terminals. ACM Gatlinburg Conf., Gatlinburg, TN, USA, 2.1–2.17, Oct. 1967.

6 Davies, D W. Communication networks to serve rapid response computers. IFIP Congress, The Hague, Netherlands, August 1968.

7 Licklider, J C R. Man-Computer Symbiosis. In: IRE Transactions on Human Factors in Electronics, HFE-1, March 1960. URL: http://memex.org/licklider.pdf

8 Licklider, J C R, Clark, W. On-Line Man Computer Communication. AFIPS Conference Proceedings, 21, 113–128. New York, Spartan Books, 1962.

9 Marill, T, Roberts, L. Towards a Cooperative Network of Time-shared Computers. AFIPS Fall Conf., 29, 425–432, Oct. 1966. New York, Spartan Books, 1966.

10 Roberts, L. Multiple Computer Networks and Intercomputer Communication. ACM Gatlinburg Conf., Gatlinburg, TN, USA, Oct. 1967. (Original ARPANET Design Paper)

11 Engelbart, D C, English, W K. A Research Center for Augmenting Human Intellect. AFIPS Conference Proceedings of the 1968 Fall Joint Computer Conference, San Francisco, CA, Dec 1968.

12 Carr, C S, Crocker, S, Cerf, V. HOST-HOST Communication Protocol in the ARPA Network. AFIPS Proc. of SJCC, 36, 589–597. Montvale, NJ, AFIPS Press, 1970.

13 Tomlinson, R. The first network email. October 21, 2004 [online] – URL: http://openmap.bbn.com/~tomlinso/ray/firstemailframe.html

14 Roberts, L G, Wessler, B D. Computer network development to achieve resource sharing. AFIPS Conf. Proc., 36, 543–549. Montvale, NJ, AFIPS Press, 1970.

15 Kahn, R E. The Organization of Computer Resources into a Packet Radio Network. IEEE Transactions on Communications, COM-25, 169–178, January 1977.

16 Abramson, N. The Throughput of Packet Broadcasting Channels. IEEE Trans. on Comm., COM-25 (10), 1977.

17 Kahn, R E. The Introduction of Packet Satellite Communications. IEEE National Telecommunications Conference, Washington, DC, 45.1, December 1979.

18 Private correspondence with Professor Peter Kirstein, June 2004.

19 First ARPANET public demonstration, organized by Robert Kahn of BBN, at the ICCC Conf. in Oct. 1972.

20 Lundh, Y. Computers and Communication – Early Development of Computing and Internet Technology – a Groundbreaking Part of Technical History. Telektronikk, 97 (2/3), 3–19, 2001.

21 Cerf, V, Kahn, R. A Protocol for Packet Network Interconnection. IEEE Trans. Comm. Tech., 647–648, May 1974.

22 Cerf, V G, Postel, J, Cohen, D. IP – TCP split in 1978. Private communication.

23 Retz, D. Structure of the ELF Operating System. Paper presented at the National Computer Conference, AFIPS, 1974.

24 BBN. Specifications for the Interconnection of a Host and an IMP. Cambridge, MA, USA, Bolt, Beranek and Newman, May 1978. (Report No. 1822)

25 Spilling, P, Cerf, V G, Kirstein, P. A Study of the Transmission Control Program, a Novel Program for Internetwork Computer Communications. NATO Grant, 1149, 1976.

26 Jacobs, I M, Binder, R, Hoversten, E V. General Purpose Packet Satellite Network. Proc. IEEE Spec. Issue on Packet Communications Network, 66, 1448–1467, 1978.

27 Spilling, P, Lundh, Y, Aagesen, F A. Final Report on the Packet Satellite Program. Kjeller, NDRE, 1978. (Internal Report E-290, NDRE-E.)

28 Chu, W W et al. Experimental Results on the Packet Satellite Network. NTC-79, Washington DC, November 1979.

29 Spilling, P, McElwain, C, Forgie, J W. Packet Speech and Network Performance. Kjeller, NDRE, 1979. (Internal Report E-295, NDRE-E.)

30 Spilling, P. Low-Cost Packet Radio Protocol Architecture and Functions – Preliminary Requirements. Menlo Park, CA, SRI International, October 1980. (Technical Note 1080-150-1.)

31 Spilling, P, Craighill, E. Digital Voice Communications in the Packet Radio Network. ICC-80, Seattle, WA, June 1980. Also available: SRI International, 1980. (Packet Radio Technical Note 286.)

32 Spilling, P, Tobagi, F. Activity Signaling and Improved Acknowledgements in Packet Radio Systems. Menlo Park, CA, SRI International, 1980. (Packet Radio Technical Note 283.)

33 Mills, D L. The Fuzzball. Proc. ACM SIGCOMM 88 Symposium, Palo Alto, CA, August 1988, 115–122.

34 Metcalfe, R M, Boggs, D R. Ethernet: Distributed Packet Switching for Local Computer Networks. Comm. ACM, 395–404, 1976.

35 IEEE. Standard for Information technology – Telecommunications and information exchange between systems – Local and metropolitan area networks – Specific requirements, Part 3. New York, IEEE Press, 1985. (ANSI/IEEE Std 802.3-1985.)

36 Berkeley UNIX 4.2 bsd, a complete rewrite of the original AT&T UNIX version.

37 Pouzin, L. Presentation and major design aspects of the Cyclades computer network. 3rd IEEE/ACM Data Comm. Symp., St. Petersburg, Nov. 1973.

38 Porcet, F, Repton, C S. The EIN communication subnetwork principles and practice. Proc. ICCC, Toronto, 523, Aug 3–6, 1976.

39 Series X recommendations. The Orange Book, ITU, Geneva, 1977.

40 ISO. Basic Reference Model. Geneva, ISO, 1983. (ISO 7498-1.)


Paal Spilling obtained his cand.real. degree in physics from the University of Oslo in 1963. In January 1964 he started on a PhD program at the University of Utrecht, The Netherlands, and finished his degree in summer 1968 in experimental low-energy neutron physics. Subsequently he joined the nuclear physics group at the Technical University in Eindhoven, The Netherlands. In January 1972 Spilling started to work for the Norwegian Defence Research Establishment (NDRE) at Kjeller, and from the end of 1974 he was involved with the ARPA-funded research program in packet switching and internet technology under the leadership of Yngvar Lundh. In 1979/80 he was a visiting scientist with SRI International in Menlo Park, California, where he worked on the Packet Radio program. Spilling left NDRE in August 1982 to join the Research Department of the Norwegian Telecommunications Administration (NTA-RD), also at Kjeller. Subsequently he established the first internet outside the USA, interconnected with the US internet. At NTA-RD Spilling worked with communications security and the combination of internet technology and fiber-optic transmission networks. In 1993 Spilling became professor at the Department of Informatics, University of Oslo, and the University Graduate Center at Kjeller.

email: [email protected]

Yngvar Lundh is an Electrical Engineer from the Norwegian Institute of Technology, 1956. He worked as a research engineer at the Norwegian Defence Research Establishment until 1984, then as engineer in chief at NTA/Telenor. From 1996 he has been running his own consulting company. He was a guest researcher at MIT 1958–59, with Bell Labs 1970–71, and part time professor of computer science at the University of Oslo from 1980. Lundh concentrated on the development of new technologies in electronics, computing, automation and telecom. He initiated and headed research groups for that purpose and was dedicated to the possibilities for new industry in Norway. Computer production (Norsk Data), digital switching (“Node technique”, STK/Alcatel), electronic mail in Norway and other enterprises were among the results.

email: [email protected]


Why and how Svalbard got the fibre1)

ROLF SKÅR

Rolf Skår is Director General of the Norwegian Space Centre

The Norwegian ground station for polar orbiting satellites at Svalbard, SvalSat, has the best location in the world for serving owners of such satellites. Satellite communication was a prerequisite for establishing the station in 1997. However, in 2002, the future of SvalSat was threatened due to the fact that it was not connected to the global fibre network. In record-breaking short time, and in cooperation with Telenor, the Norwegian Space Centre managed to finance and finalize the project connecting Svalbard to the mainland by fibre-optic cable for the benefit of SvalSat, scientific institutions, and the society at large in Longyearbyen.

1) This article is based on a speech given by Rolf Skår at the inauguration ceremony of the fibre cable in Longyearbyen on January 31, 2003.

Abbreviations used in the text

NSC – Norwegian Space Centre
NPP – NPOESS Preparatory Project
NPOESS – National Polar-orbiting Operational Environmental Satellite System
NASA – National Aeronautics and Space Administration
IPO – Integrated Programme Office
EUMETSAT – European Organisation for the Exploitation of Meteorological Satellites
ESA – European Space Agency
UNIS – University Studies at Svalbard

Until 2004 the data from polar orbiting satellites were acquired by SvalSat, then sent via geostationary satellites directly from Svalbard to SvalSat customers on the East Coast of the US

Introduction

With all due respect to coal mining, tourism and all other activities at Svalbard, it was Svalbard’s role in space activities that got Svalbard the fibre.

It is a simple geographical fact, supported by an airport and an enjoyable community with a mild climate, that makes SvalSat the best location in the world for supporting polar orbiting satellites. SvalSat can see all the orbits.

The short version of why Svalbard got the fibre is that the Norwegian Space Centre (NSC) felt that SvalSat’s future was threatened when Northrop Grumman/Raytheon for NPOESS did not select Svalbard for its global network of 15 ground stations. They instead selected Helsinki because Helsinki was connected to the global fibre network. To put it mildly, we were not happy.

We were fortunate to have very high-level agreements between The United States of America and the Kingdom of Norway for collaboration on the peaceful use of outer space. Under this agreement we had the practical cooperation agreements between NSC and NASA which were the basis for SvalSat’s development, and 2002 saw the signing of the first agreement between ourselves and IPO/NOAA on SvalSat’s role in the future next generation US polar orbiting weather satellites, the NPP and NPOESS.


Even more importantly, over many years the traditional good relations and friendship between the USA and Norway had developed into a friendship and mutual trust between individuals who were in a position to shape the future.

I have been criticized for working against the interests of space, or specifically against satellite communication, by promoting fibre instead of communication via geostationary satellites. Let me simply say that it was because of satellite communication that SvalSat was developed; it was very reliable, we never lost a pass due to satellite communication problems, and it has served the Svalbard community very well since 1979.

Communication costs were high

However, SvalSat’s future was in need of a more competitive solution, and I would simply like everybody to hear me say that I will promote and fight for space based solutions when they provide the best solution for the users, and only then.

The story of fibre to Svalbard began with both Telenor and NSC doing feasibility studies in early 2002. Both studies had the same conclusion: a sub-sea fibre was feasible; the recommended route was from Tromsø via Bear Island to Spitzbergen.

Both studies concluded that a system would cost around 50 mill USD, or 400–500 mill NOK, for one cable with satellite communication as backup. This led to the formal process of Telenor Svalbard and NSC informing all parties with relevant space interests of this possibility and inviting them to a stakeholders’ meeting in Longyearbyen 24–25 July 2002. Here the Telenor Group met with decision makers from NASA and IPO, plus EUMETSAT and ESA.

As a result of this meeting we had several telephone conferences with the stakeholders, and another meeting was planned for November 1 at IPO in Washington DC.

Then the project collapsed: first NPOESS selected Helsinki, and then the Telenor Group withdrew.

Was the game over?

No, this is when the real game started, and this is the more interesting story of how Svalbard not only got the fibre, but how it got 32 fibres in record time.

We went to America, and during the three days from October 30 to November 1 we laid the groundwork. Based upon the interest from the feasibility studies and stakeholders’ meetings, we were approached by a very hungry telecom fibre sub-sea cable industry.

On October 30 we visited one such company – it was not the one finally chosen – who convinced us that they could do a turnkey installation of one cable from Tromsø to Longyearbyen for 40 mill USD. Some other suppliers even indicated much lower prices, below 30 mill USD.

If there is one date to mark how Svalbard got the fibre, it is October 31, 2002. At 10 a.m. we met with NASA’s Bob Spearing at NASA Headquarters. He was the architect of the implemented solution. Bob told me that NASA could not invest in a fibre to Svalbard; however, they could pay for a data transmission service from Svalbard to the US. Then I gave Bob an offer NASA could not refuse, and it truly reflects the mutual trust, the friendship and the common interests which had developed since 1995, when SvalSat was planned as a joint effort between NASA and NSC. I offered NASA that they should pay their current cost for satellite communication from SvalSat to the US, around 6 mill USD per year, for a few years until 20 mill USD in Net Present Value had been accumulated, and we would provide a 25 year transmission service with six times the current bandwidth. Among friends and partners you can be generous, therefore I offered 25 years – not 10 or 15, and six times the bandwidth – not double.
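The arithmetic behind this offer can be sketched as follows. The article gives the 6 mill USD yearly payment and the 20 mill USD NPV target, but not the discount rate; the 7 % rate below is purely an assumption for illustration, so the resulting year count is indicative only.

```python
# Back-of-the-envelope sketch of the payment arithmetic described above:
# the payer keeps paying ~6 mill USD/year until the discounted sum of
# those payments reaches 20 mill USD.  The 7 % discount rate is assumed.

def years_to_reach_npv(annual_payment, target_npv, rate):
    """Count the yearly payments needed before their Net Present Value
    (each year's payment discounted back to today) reaches the target."""
    npv, years = 0.0, 0
    while npv < target_npv:
        years += 1
        npv += annual_payment / (1 + rate) ** years  # discount year-t payment
    return years, round(npv, 2)

print(years_to_reach_npv(6.0, 20.0, 0.07))  # -> (4, 20.32)
```

With these assumptions the 20 mill USD target is reached after four yearly payments, which is consistent with the article's "a few years".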

When the meeting was over, Bob arranged for me to meet Sean O’Keefe, the NASA administrator, and we talked about Svalbard and about Norway, and I invited him to come to Norway.

The same afternoon I met with John D. Cunningham from IPO; we met alone, and that was when Svalbard got the fibres. John was enthusiastic and eager to replace Helsinki with Svalbard. He promised to work with NASA and provide another 20 mill USD, paid over a few more years than NASA because their bandwidth requirements were lower, but the value of Svalbard compared to Helsinki was worth the 20 mill USD he agreed to pay.

On November 1, the stakeholders’ meeting did not develop as planned. First Telenor (the satcom people) announced that Telenor did not wish to invest in a Svalbard cable because of its negative economic impact for the Telenor Group.

Back in business

Then I announced that NSC would like to invest in a cable based upon an understanding between NASA, IPO and ourselves. We had some indications that there was indeed a buyer’s market for sub-sea fibre cables, which we wished to benefit from.

We then started to rush the project and went into a higher gear. On November 12, the Board of NSC gave me the go-ahead based on the NASA and IPO – NSC understanding. On November 15, I met with Jan Edvard Thygesen, Senior Vice President of Telenor and responsible for all Telenor networks.

A very important ‘yes’ to the project was his answer. And more than that, he was enthusiastic and promised full support from Telenor Networks and Telenor Svalbard. On November 18, the main terms of Telenor’s role were agreed to.

Cable at low price

We rushed out the Invitation To Tender (ITT) on December 21. It had been prepared by us and a large group of Telenor experts and was for a turnkey end-to-end transmission system from Tromsø to Longyearbyen, with 40 mill USD available and the vendor free to choose route and technology; an option was left open as to whether 40 mill USD was sufficient for two independent cables forming a ring.

The vendors were asked to do the so-called Desk-Top-Study, select the route, and then do the detailed survey of the selected route. The ITT deadline was February 3. In early January we discovered that going from Tromsø to Longyearbyen was too risky due to the lack of protection from trawlers.

Another rush job: we extended the tender due date to February 25, and we did the Desk-Top-Study with good help from the Harstad company Seaworks together with Telenor. We also decided to do the detailed survey for the new route going from Harstad over Andøya to Longyearbyen. The area outside Andøya is the only trawler-free zone between Kirkenes and Trondheim and therefore the ideal place to get from land into deep water beyond 1600 m water depth.

When we opened the tenders on February 25 we had a huge surprise: we could get two completely independent cables forming a ring for around 40 mill USD.

However, there was also a major disappointment: none of the companies accepted payment from NASA and IPO over seven years. So we decided to recompete with only the four vendors that we believed could do a turnkey solution, preferably a ring, for 40 mill USD.

The recompetition resulted in a clear winner, Tyco, and on March 7 we announced to all bidders that Tyco had been selected for contract negotiation, and we asked Tyco for their Best and Final Offer for a ring with two independent cables.

During contract negotiations in early April we had some 25 telecom experts, including seven lawyers, on our side; we worked very long hours to complete the very detailed turnkey contract to mirror our ITT and Tyco’s Best and Final Offer.

We signed the contract on April 14; however, it would only come into force when financing was in place and we had all the necessary permissions.

We committed 300,000 USD to Tyco’s early planning and ordered the Detailed Survey, without which we could not get the permissions, and together with Tyco we set ourselves a deadline of May 15 to get financing in place and another deadline of July 1 to get all the permissions.

Financing was to become more difficult than anticipated, and the week starting with my birthday, May 13, proved to be very dramatic. Tyco had proposed a specialist US financing company, Hannon Armstrong, to arrange the financing both for the construction period and also to purchase the yearly revenue stream from NASA and IPO as payment for their transmission service.

One challenge was that NRSE (Norsk Romsenter Eiendom), the legal customer, a 100 % owned subsidiary of the Norwegian Space Centre, a foundation, did not have any meaningful equity, nor any cash left after paying for the Detailed Survey.

I used all my tricks; I invited the decision makers for the very best of the ‘Huset’ dinners, Huset being the famous restaurant at Longyearbyen. I invited them on the best of snowscooter safaris to Barentsburg and to Isfjord Radio. I did not succeed.

The most dramatic week

We left Svalbard on May 15 without an agreement, and Stan Kramer and myself signed an extension of the deadline until May 19, which we also missed.

Hannon Armstrong wanted a government guarantee. I told them that only the National Assembly could authorize such a guarantee. Finally, by involving the Director of Public Prosecutions2) and the Minister for Trade and Industry, a compromise was accepted. Mr. Ansgar Gabrielsen wrote a letter to Hannon Armstrong whose text they had agreed to beforehand. From then on the project was unstoppable. We only needed the final permissions.

2) In Norwegian: Riksadvokaten

Did we take too high risks? I believe I understood the risks involved, and certainly the rewards. The offer from Tyco depended upon doing the project during 2003. It was probably our only chance to get two cables instead of one. We had invested around 10 mill NOK; the majority of this was the Desk-Top-Study and the order for the Detailed Survey, which started on May 19.

I was so convinced that we would get the necessary permits that Tyco also believed this to be the case, even more so when on May 21 we committed 2 mill USD in a more serious downpayment to Tyco. So after May 21, the serious work started.

To give you a feel for Norwegian bureaucracy, I will give you two real examples of how fast it can be. At Andøya we needed a building of some 140 m2 to house the power-feed equipment, a total investment of around 5 mill NOK. We contacted the owner of the land on May 7; on May 13 we agreed on the price, and the same day we sent an application to the local authorities for permission to use farmland for our purpose and for building permits. We got all the approvals needed by May 22, construction work started the next day, and the building was ready to receive its first cable on July 25.

On June 17, Sean O’Keefe took Helle Hammer, the State Secretary of the Ministry of Trade and Industry, together with a few people from NSC, on his NASA plane to Svalbard, accepting our invitation to visit Norway and Svalbard. During the visit I learned that the Detailed Survey necessary for our application for the main Government permit was ready. I signed the application for the permit and personally handed it in to the Governor’s office. Four days later we had the critical permissions, and the following week we returned to Svalbard to celebrate and sign the final contract documents. The trust that Tyco, Telenor and ourselves had put into the project was truly rewarded.

The project was well under way to fully benefit from the Arctic summer with midnight sun and continuous daylight. Look at these milestones: the complete wet plant, 2,700 km of cables and 40 repeaters, was ploughed and laid on the bottom of the sea from July 21 until August 15 – 25 days, using two of the world’s most advanced specialist ships and setting a world record for the deepest water depth for ploughing, 1671 meters.

The good planning was rewarded

The project team that completed this project in record time did a fantastic job. Torbjørn Dyb from Telenor

Representatives from IPO, NASA and Raytheon (antenna provider) are leaving the foundation of the IPO antenna at SvalSat on June 24, 2003. They are clearly satisfied with what they have seen

ISSN 0085-7130 © Telenor ASA 2004

Page 140: 100th Anniversary Issue: Perspectives in telecommunications

138 Telektronikk 3.2004

was our Project Manager, leading a team of Telenor engineers, and from Tyco were Debbi Brask and Dave Willoughby.

There were no surprises and no cost overruns. On April 14, we signed the conditional turnkey contract with Tyco. The agreed price was the final price, not a cent in additional charges. And we got a little more than we asked for.

Our plan was one cable with two fibre pairs (four fibres) and 10 Gbit/s. In the contract negotiations Tyco offered one cable with eight pairs and one cable with two pairs, and with some gentle arm twisting and some goodwill, what is now installed are two fully independent cables, each with eight pairs, fully repeatered with 40 repeaters, and with double the initial capacity, 20 Gbit/s on each cable. The system may be expanded to 2500 Gbit/s on each cable. We believe this to be sufficient for any foreseen or even unforeseen future need.
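These capacity figures can be cross-checked against numbers that appear later in the issue (eight fibre pairs per cable, up to 32 WDM wavelengths per pair, 10 Gbit/s per wavelength, two wavelengths lit initially); a back-of-the-envelope sketch, not part of the original text:

```python
# Back-of-the-envelope capacity check for one Svalbard cable.
# Figures taken from the article: 8 fibre pairs, up to 32 WDM
# wavelengths per pair, 10 Gbit/s per wavelength.
fibre_pairs = 8
wavelengths_per_pair = 32
gbit_per_wavelength = 10

initial = 2 * gbit_per_wavelength  # two wavelengths lit at start
maximum = fibre_pairs * wavelengths_per_pair * gbit_per_wavelength

print(initial)  # 20 Gbit/s, as stated
print(maximum)  # 2560 Gbit/s, close to the ~2500 Gbit/s quoted
```

The small mismatch (2560 versus the quoted 2500 Gbit/s) is presumably just rounding in the article.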

I am convinced that it is the local community that will benefit most from having two cables instead of one cable with satellite communication as back-up. Satellite communication would be a very expensive solution, to be paid for by Telenor Svalbard and thereby by its customers.

It was Telenor Svalbard, through their agreement to cover the costs exceeding 40 mill USD, that made it possible to get two cables. These additional costs will have to be paid by the local users, and we have agreed that this will be done over the first six years, so after 2009 there should be a significant decrease in the cost of using the fibre cables.

The final metres of the fibre are put in place by the cable ship on the sea floor reaching Spitzbergen in August 2003

Two independent cables, each with very large capacity, now connect Svalbard with mainland Norway and the international fibre network


I am pleased that we have been able to find a new pricing model for selling really broad broadband. There is 20 Gbit/s in each cable, so using the pricing model of mainland Norway, almost all this capacity would be unused. The new model will benefit the science community most. They will have a 1 Gbit Ethernet connection to the science community in the rest of the world, and Svalbard will have the best communication system of any higher learning or research establishment in Norway. I promised this as our gift to UNIS on its 10 year anniversary.

What we are really looking forward to at the Norwegian Space Centre is for all the data that will now be downloaded from Earth science and weather satellites at SvalSat to actually be used by the scientists at UNIS and in Norway.

With virtually unlimited bandwidth there is a unique opportunity: raw data at full sensor fidelity may be downloaded from the sensors at 400 Mbit/s or more and, through the fibre network, made available to scientists.

Rolf Skår (63) is Director General at the Norwegian Space Centre (NSC) in Oslo. He holds an MSc in Cybernetics from the Norwegian University of Science and Technology (NTNU) in Trondheim from 1966. He has been engaged at the Norwegian Defence Research Establishment (NDRE) at Kjeller, and in 1967 he was one of the co-founders of the Norwegian computer company Norsk Data A.S, in which he held several management positions and was CEO from 1978 to 1989. From 1990 to 1992 he was General Director of the Norwegian Council for Scientific and Industrial Research (NTNF), after which he became President of the consulting company Norconsult International. From 1994 he has been engaged in space research. He is the Norwegian delegate to the European Space Agency (ESA) council, and since 1998 he has held the position of Director General of NSC.

Rolf Skår has extensive experience in the management of large and complex information technology projects, as well as in international sales and marketing management, including sales of large information technology systems. He is an active participant in various government policy studies, in particular related to space, science and technology policy.


Introduction

This article provides a technical overview of the fibre optic cable and the necessary components that make up a complete transmission system. The items covered are the cable technology itself, the repeater systems including the power feeding, as well as the monitoring and maintenance systems.

The cable system

Overview

The fibre optic cable system goes between Harstad and Longyearbyen via Andøya. The US company Tyco has delivered the system. Tyco has its roots in the well-known Bell corporation. There are two separate cables for redundancy; only one carries traffic at a time.

Technical solution and implementation of the Svalbard fibre cable

EIRIK GJESTELAND

Eirik Gjesteland is an Engineer at the Operational Centre of Telenor Networks

Longyearbyen in Svalbard at 78° North is an ideal location to support polar orbiting satellites. All 14 daily passes of a typical sun synchronous orbit are visible, with satellite contact every 7 and 17 minutes. SvalSat was developed in the period 1996–1999 as a joint effort between NASA and the Norwegian Space Centre. It has been supporting NASA earth observation satellites through an Intelsat satellite at 1° W. The available transmission rate is around 55 Mb/s; in addition there are some ISDN lines and one 2 Mb/s line. SvalSat has now grown to be the world's largest polar ground station serving earth orbiting and polar weather satellites.

A fibre optic telecommunications cable between the Norwegian mainland and Svalbard now replaces the Intelsat connection. It was deployed in 2003 and put into operation in January 2004. The article provides an overview of the technology used, together with explanations of the different building blocks of the complete transmission system. The cable and additional equipment have an expected lifetime of 25 years, and the current capacity utilisation is 10 Gb/s. Telenor Networks is responsible for the technical implementation of the project, as well as the operation and maintenance.

Figure 1 The SvalSat Earth station is placed at Platåberget, outside Longyearbyen, Svalbard


Each cable has 20 optical repeaters, spaced approximately 67 km apart, to keep the signal power up on the more than 1300 kilometre journey between Andøya and Svalbard. The repeaters are actually amplifiers: they do not regenerate the signal, only amplify it. DC power to the repeaters is fed from Andøya. Each repeater has eight amplifier pairs (one per fibre pair) and a ‘high loss loop back’ for monitoring.

The two separate cables are called Segment 1 and Segment 2. Segment 1 is 1375 km, with 172 km buried at an average depth of 2 m below the ocean floor to keep the cable safe from external hazards. Segment 2 is 1339 km and is buried for 173 km at the same average depth as Segment 1. Between Breivika and Harstad the cables are called Segment 1A and 2A; both sea and land cable is used. Segment 1A is 61 km and Segment 2A is 74 km long. These sections are so short that optical repeaters are not needed.
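As a quick sanity check on these figures (our arithmetic, not the article's): 20 repeaters divide each segment into 21 spans, giving average spans in the mid-60 km range, consistent with the approximately 67 km spacing quoted above:

```python
# Average amplifier span from the segment lengths and repeater
# counts quoted in the article (20 repeaters => 21 spans).
segments = {"Segment 1": 1375, "Segment 2": 1339}  # km
REPEATERS = 20

spans_km = {name: length / (REPEATERS + 1) for name, length in segments.items()}
for name, span in spans_km.items():
    print(name, round(span, 1), "km per span")
```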

Figure 3 The two separate cables, Segment 1 and Segment 2, are 1375 and 1339 km, respectively. They are partly buried under the sea floor. The cables between Breivika and Harstad (1A and 2A) are both sub sea and land cables, with lengths of 61 and 74 km, respectively

Figure 4 The fibre cable segments contain eight fibre pairs, of which only one is currently used

Figure 2 Two separate fibre optic cables are laid between Harstad on the Norwegian mainland and Longyearbyen on Svalbard. They go via Breivika on the island of Andøya before leaving the North Norwegian coast

(Diagram labels, Figures 2–4: route Harstad – Breivika (Andøya) – Longyearbyen (Svalbard). The 10 Gb/s signal from Harstad is fed into the fibre pairs of the wet plant at Breivika, which also feeds DC power. The wet plant is the sub sea cable with 20 repeaters, one approximately every 67 km. Segment 1: 1375 km; Segment 2: 1339 km; Segment 1A: 61 km; Segment 2A: 74 km. Eight STM-1 (1+1) circuits run Svalbard–Harstad over Segment 1+1A, FP1 (Service) and Segment 2+2A, FP1 (Protection); fibre pairs 2–8 carry no wavelengths. S = Service; P = Protection; FP = Fibre pairs.)

System configuration

Of the eight available fibre pairs in each cable segment, only one is currently used. To use the remaining pairs, additional equipment must be installed in Svalbard and Harstad. One fibre pair has more than enough capacity for today's traffic load. If Segment 1 should fail, the traffic will automatically be switched to Segment 2.

The fibre optic cable

Different types of cable are used on the way to Svalbard. The type of cable used depends on the risk of damage. Close to shore, where the risk of damage from fishing equipment and ships' anchors is significant, the cable has a thicker armour to protect the


fragile fibre pairs. The cables are also buried in the ocean floor close to shore, where the water is relatively shallow; a special plough is used to do so. Two cable ships were used to lay the cables because one ship alone could not hold the amount of cable necessary to cover the distance. One ship started from Longyearbyen and one from Breivika, and they met midway to splice the two ends. The two ships then returned to lay the second cable, see Figure 5.

The different cable types used are from the SL 21 family, shown in Table 1. The SL 21 provides the optical path for the undersea digital transmission and supervision signals, and a power path for the optical amplifiers.

It is not only the armour that varies, but also the type of optical fibre. In this system three different fibre types are used:

• Large mode field fibre (LMF)
• High dispersion shifted fibre (HDF)
• Non dispersion shifted fibre (NDSF)

The different fibre types affect the signal in different ways, so to minimise distortion the design combines fibres with different characteristics.

The optical repeater

There are 20 optical repeaters on each segment, 40 in total; a repeater approximately every 67 kilometres. The repeaters have eight optical amplifier pairs each, one per fibre pair, and operate on DC power fed from Breivika. The system is what we call a ‘single end feed system’. Each repeater also contains a ‘high loss loop back’ (HLLB) used for monitoring purposes.

Figure 7 The cable ship ‘Cable Innovator’ covered the distance from Breivika to the point where it met the other cable ship, ‘Maersk Recorder’, to splice the two ends

Figure 8 The cable ship ‘Maersk Recorder’ covered the distance from Longyearbyen to the point where it met ‘Cable Innovator’, to splice the two ends

Figure 6 Example of the type of plough used to bury the cable

(Figure 5 labels: Segment 1 landed at Hotellneset, Svalbard on 21 July 03 and near Breivika, Norway on 26 July 03, with the final splice on 1 August 03; Segment 2 landed on 1 August 03 and 4 August 03, with the final splice on 13 August 03; cable ships CS Maersk Recorder and CS Innovator.)

Figure 5 The two cable ships deployed the cable from each end and met midway, where the ends were spliced


The Tyco repeaters use state-of-the-art optical technology to achieve high performance and reliability. The repeater amplifies multiple channel signals on multiple fibre pairs and is designed for a lifetime of 25 years on the ocean floor and on board cable ships during laying, burying and repair operations.

The repeaters are designed using the minimum number of components necessary. In this way the risk of failure is minimised. Each repeater contains eight amplifier pairs to accommodate the eight fibre pairs. Each amplifier pair consists of the components necessary to support all wavelength division multiplexed channels on a fibre pair. The maximum number of wavelengths that may be used is 32. Each amplifier pair contains two erbium doped fibre amplifiers (EDFAs) that provide broadband amplification on two transmission paths in opposite directions. Each amplifier also has its own power supply and operates independently, so a failure on one amplifier pair will not affect the amplifier pairs on the other fibre pairs.

Figure 13 shows one amplifier pair, including the laser pump unit (LPU) and the loop back coupler module (LCM). The LPU includes four 980 nm laser pump modules. The output is combined using wavelength division multiplexing or a polarisation beam combiner, as the figure shows.

LW: Benign, sandy bottom; depth to 8000 m. Core cable, light protection; medium-density polyethylene white jacket, providing improved visibility when submerged.

SPA: Somewhat rocky bottom; risk of shark attack; depth to 6500 m. Metallic tape and high-density polyethylene white outer jacket applied over the core provide additional abrasion protection and hydrogen sulfide protection.

LWA: Rocky terrain; moderate risk of trawler damage; depth to 1500 m (to 1200 m buried); typically used for burial. Light armor wire layer applied to the core cable (a).

SA: Rocky terrain; high risk of trawler damage; depth to 1000 m (to 800 m buried). Heavy armor wire layer applied to the core cable (a).

DA: Very rocky terrain; high risk of trawler damage; moderate risk of abrasion; depth to 400 m. One armor wire layer applied to LWA cable (a).

(a) Tar-soaked nylon yarn is used for the outer jacket of armored cable.

Table 1 SL21 cable types

(Figure 9a labels, LW cable construction from the centre out: optical fibres in a soft gel; polymer tube; 8 ultra-high-strength steel wires of 1.88 mm diameter with water-blocking material in the interstices; 16 ultra-high-strength steel wires, circumferentially alternating 8 of 1.76 mm and 8 of 1.36 mm diameter, OD 9.47 mm; copper sheath, OD 10.4 mm; medium density polyethylene insulation, OD 21.0 mm. SPA cable: same as the lightweight cable, plus longitudinally wrapped adhesive-coated steel tape, OD 21.5 mm, and high density polyethylene, OD 31.8 mm. OD = outer diameter.)

Figure 9a The cables used are from the SL21 family of cables: SL21 LW (light weight) cable and SL21 SPA (special application) cable


The 3 dB coupler splits the output of the LPU so that the LPU pumps the EDF of each fibre of a fibre pair. This provides redundancy, as well as uniform amplification in each direction through the amplifier.

The LCM provides a wavelength selective, high loss loop back (HLLB). The LCM also provides a broadband loop back for the optical time domain reflectometer (OTDR) and the coherent time domain reflectometer (COTDR). These are both used to monitor the state of the sub sea cable system. They can

Figure 9b The cables used are from the SL21 family of cables (continued)

(Figure 9b labels. SA cable: same as the light-weight cable; 16 tar-covered galvanized wires, each 4.57 mm OD, total OD 30.2 mm; tar-soaked nylon yarn, OD 32.7 mm, with an outer tar-soaked nylon yarn layer, cable OD 35.2 mm. DA cable: same as the light-wire armored cable; tar-soaked nylon yarn, OD 37.0 mm; 27 tar-covered galvanized wires, each 3.35 mm OD, total OD 34.4 mm; tar-soaked nylon yarn, cable OD 39.5 mm. OD = outer diameter. SL21 SA (single armored) cable and SL21 DA (double armored) cable.)

(Figure 10 residue: the straight line diagram for Segment 1 runs from Hotellneset, Svalbard to Breivika and Harstad, Norway, showing burial sections, water depths from 0 down to about 2,500 m, section lengths, a cable-type sequence of land cable, DA, SA and LWA (buried) near the Svalbard shore, LWA, SPA and LW in deep water, and SA, DA and land cable towards the Norwegian coast, together with a summary of cable types, quantities and spares.)

Figure 10 The Straight Line Diagram (SLD) shows where the different fibre types are used. Segment 1 is used as an example

Figure 11 The cable ship installing the cable off the coast of Breivika


also be used to determine the location of a fault that has occurred.

The optical amplifiers are pumped with a 980 nm pump laser source. Pumping at this wavelength results in an optical bandwidth of the amplifier that is greater than 20 nm, centred around 1551 nm. The noise figure for such a 980 nm pumped EDFA is less than 5 dB. Figure 14 shows typical EDFA gain and noise characteristics. As we can see, the noise is well below the signal in the operating range of the amplifier.

(Figure 12 labels: the repeater body is 137 cm (54 in) long and 33 cm (13 in) in diameter, with a cable-to-repeater coupling and repeater end cover at each end. Figure 13 labels: each direction carries the signal through a WDM coupler, erbium doped fibre, gain flattening filter (GFF) and fibre Bragg gratings (FBGs); the laser pump unit holds four 980 nm pump laser modules combined by WDM/polarisation beam combiners and a 3 dB coupler; the 10 dB loopback coupler module includes line buildouts (LBOs). WDM = wavelength division multiplexing; GFF = gain flattening filter; LBO = line buildout; FBG = fibre Bragg grating; symbols mark fusion splices and optical fibre terminations.)

Figure 13 One amplifier pair includes a laser pump unit (LPU) and a loop back coupler module (LCM)

Figure 12 The optical repeater is quite large and weighs approximately 750 kg in total, including the couplings on each side. A total of 40 repeaters are installed

Gain compensation

The length of the EDF (erbium doped fibre) and the pump level determine the gain and noise characteristics of the amplifier. At very low input power the gain and noise characteristics are constant. When the input power is increased, the gain of the repeater will eventually start to decrease. When the input power level is raised sufficiently, the gain of the amplifier will be 0 dB, as shown in Figure 14. This gain compression characteristic of optical amplifiers makes them especially suitable for transmission system applications, because the power level along a chain of amplifiers in a transmission system is self-correcting and self-limiting. This means that if an amplifier for one reason


or another gets a drop in input power, the gain in this amplifier will increase to compensate for the low input power. There can be different reasons for a drop in input power: a degraded pump in the previous amplifier, component aging, a degraded splice or a repair. Any of these results in a lower input power to the next amplifier, which compensates by increasing its gain. The following amplifiers in line will compensate for the power loss until the signal is back at its equilibrium level.
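Since this self-correcting behaviour is the key property, a toy numerical model may help; all numbers below are illustrative assumptions, not system data:

```python
# Toy model of the self-correcting power level along an amplifier
# chain: a step loss is gradually absorbed because a lower input
# raises the gain of the amplifiers that follow.
SPAN_LOSS_DB = 16.0          # assumed fibre loss per span
SMALL_SIGNAL_GAIN_DB = 20.0  # assumed unsaturated gain
SAT_INPUT_DBM = -16.0        # assumed knee where compression sets in
COMPRESSION = 0.5            # assumed dB of gain lost per dB above the knee

def gain_db(p_in_dbm):
    """Crude saturable-gain curve: gain falls as input power rises."""
    excess = max(0.0, p_in_dbm - SAT_INPUT_DBM)
    return SMALL_SIGNAL_GAIN_DB - COMPRESSION * excess

# With these numbers the equilibrium input is -8 dBm (gain exactly
# balances span loss). Start 4 dB low, as after a degraded splice:
p, levels = -12.0, []
for _ in range(6):
    p = p + gain_db(p) - SPAN_LOSS_DB  # amplify, then cross the next span
    levels.append(round(p, 2))
print(levels)  # climbs back towards the -8 dBm equilibrium
```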

Mechanical design

The eight amplifier pairs are mounted on assemblies inside a high-pressure container. The material used is a copper-based alloy that withstands corrosion and supports hydrostatic pressure to a depth of 8000 m. Cathodic protection of the housing is not necessary. The high-pressure container protects the electrical and optical elements in the repeater from the ocean environment and provides a high strength connection between the cable terminals. The container consists of a cylindrical body and a repeater cone at each end connecting the body to the coupling hardware for the cable. The cable is connected to the cones using a special coupling, which provides a path for both DC power and the optical signal via splices between the cable and repeater fibres.

The lives of electronic components increase with decreasing temperature; therefore the amplifiers inside the repeater are mounted on a heat transfer plate to keep the temperature down. The seawater outside the repeater keeps its temperature down.

To protect the repeater from corrosion, the parts that are exposed to seawater are made of a tested copper-based alloy. The parts carrying electrical power are insulated from the seawater by a thick layer of extruded polyethylene.

The Power Feed Equipment

The Power Feed Equipment (PFE) provides power to the optical repeaters. The PFE is located in Breivika. In this system single end feeding is used. Systems that are fed from both ends are safer, but the power supply in Breivika should provide enough redundancy to make the system reliable. The primary return current path is the seawater and the seabed.

The PFE converts the –48 V battery supply at the cable station in Breivika to the regulated voltage needed for the system. The PFE feeds the system with a constant current of 1.1 A at 2500 V. A generator back-up maintains the 230 V mains voltage used to feed the –48 V battery system in case of a mains power failure.
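A rough power-budget sketch from the quoted feed figures; the split between conductor drop and repeater drop below is our assumption for illustration, as the article does not give it:

```python
# Rough power-feed arithmetic from the figures in the article
# (1.1 A constant current at 2500 V, 20 repeaters on the powered
# segment).
FEED_CURRENT_A = 1.1
FEED_VOLTAGE_V = 2500.0
REPEATERS = 20

total_power_w = FEED_CURRENT_A * FEED_VOLTAGE_V
print(total_power_w)  # about 2750 W delivered into the line

# If, say, 1000 V were dropped along the copper conductor itself
# (an illustrative assumption only), the voltage left to share
# among the repeaters would be:
assumed_cable_drop_v = 1000.0
per_repeater_v = (FEED_VOLTAGE_V - assumed_cable_drop_v) / REPEATERS
print(per_repeater_v)  # 75 V per repeater under that assumption
```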

In normal operation the two converters are serially connected, each supplying half of the load to the cable. Each converter can supply the cable system on its own, so if a problem occurs in one converter, or it has to be stopped for maintenance, the other converter will automatically take over the load, providing active redundancy. The monitor functions are also redundant. The switches allow converters and monitors to be reconfigured for in-service testing and repair. In the Svalbard system the cable is single-end fed, but in dual-end fed systems the feed will automatically switch to single-end if one PFE fails completely.

The output from the PFE can be in constant current mode or constant voltage mode. In normal operation, constant current mode is used.

The parameters monitored on the PFE are output voltage and current, station ground current, seabed voltage, and converter output voltage and current. The PFE is monitored using the Tyco Element Manager System (TEMS). Three are installed: one in Harstad, one in Longyearbyen and one at Telenor's Network Operation Centre, which provides 24-hour surveillance of the system.

To protect the undersea system, the PFE will shut down if the voltages or currents exceed certain threshold values.

The PFE can operate on either Ocean Ground or Building Ground. In normal operation ocean ground

Figure 14 Gain and noise figure as a function of input power for the erbium doped fibre amplifier (EDFA)

(Figure 14 axes: gain or noise figure (dB), 0 to 25, versus signal input power (dBm), –35 to +5, with curves for gain (dB) and noise figure NF (dB).)


is used. To limit the risk of damage due to lightning strikes, an Ocean Ground Protection Panel (OGPP) is installed. The OGPP provides isolation between ocean ground and building ground.

If a problem is serious enough that part of the cable must be picked up for repair, the PFE can provide a 4 to 5 Hz sine wave on its high voltage output. This signal can be detected by the cable ship to locate the cable on the ocean floor.

Line Terminating Equipment

The LTE terminal equipment platform provides an interface between the terrestrial network and the undersea system.

The LTE provides powerful Forward Error Correction (FEC), precision wavelength control, and high-stability receivers and transmitters.

Figure 15 Power feed equipment (PFE) architecture

(Figure 15 labels: an element manager supervises two converters, each with its own maintenance processor, converter/MP interface, and power stage and control; plant A and plant B alarm and monitor units; converter switch, transfer switch, test load, cable test and PFE short switch on the path to the cable; the line terminating equipment carries the traffic.)

Figure 16 The Line Terminating Equipment (LTE) converts the STM-1 signal from the Telenor network into a 10 Gb/s signal suitable for long distance sub sea transmission


The LTE uses ‘clear-channel transmission’; i.e. it is independent of the protocol used. In our case the input is an SDH signal, but any protocol may be used.

The LTE processes the 10 Gb/s signal for ultra long haul:

• Up to 10,000 km without regeneration
• Dynamic pre-emphasis
• Wavelength multiplexing and de-multiplexing
• Dispersion compensation
• Pre- and post-amplification
• Automatic wavelength adjustments
• Chirped return to zero (RZ) modulation

The LTE consists of the following building blocks:

• High Performance Optical Equipment (HPOE)
• Wavelength Terminating Equipment (WTE)
• Terminal Line Amplifier (TLA)

The High Performance Optical Equipment provides special transmit and receive line signal processing, the Wavelength Terminating Equipment provides optical wavelength processing, and the Terminal Line Amplifier amplifies the signal before sending it into the sub sea system. One HPOE is needed per channel, but two are actually used per wavelength to provide redundancy. In the Svalbard project two wavelengths are used.

The input from the Telenor network is an SDH STM-1 (155 Mb/s) signal. At both ends, Harstad and Longyearbyen, the STM-1 signal is fed into Tellabs STM-64 equipment. The output from the Tellabs STM-64 is sent to the LTE equipment. The STM-64 signal has a bit-rate of 9.953 Gb/s. The first module in the LTE is the High Performance Optical Equipment (HPOE). Each LTE has two HPOEs for redundancy. In the HPOE, extra Forward Error Correction (FEC) bits are added to the STM-64 signal, making the new bit-rate 10.664 Gb/s.
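These rates are consistent with the Reed-Solomon 14/15 code rate given in Table 2; a one-line check:

```python
# The FEC overhead can be checked against the code rate quoted in
# Table 2 (Reed-Solomon 14/15): every 14 payload bits gain one
# redundancy bit, so the line rate is the STM-64 rate times 15/14.
STM64_GBPS = 9.95328          # exact SDH STM-64 bit rate
line_rate = STM64_GBPS * 15 / 14
print(round(line_rate, 3))    # 10.664 Gb/s, matching the stated line rate
```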

For the receive operation, the opposite sequence is applied.

Functions:

Transmit direction:
• Reception and optical/electrical (O/E) conversion of the STM-64 input signal
• FEC encoding
• Insertion and encoding of overhead bytes for FEC
• Insertion of user communication channels into the FEC overhead data
• Optical line signal generation
• Post amplification

Receive direction:
• Reception and O/E conversion of the 10.664 Gb/s signal
• FEC decoding and bit error correction
• Generation of line performance parameters
• Extraction of the user channel from the FEC overhead for inter-station communication
• Generation of the STM-64 optical output signal

High Performance Optical Equipment (HPOE)

The HPOE accepts any signal having a one-zero density consistent with STM-64/OC-192 protocols, the bit-rate being 9.953 Gb/s. The signal is converted to electrical, and FEC overhead bytes are added, making the bit-rate 10.664 Gb/s.

(Figure 17 labels: STM-1 traffic from the domestic network enters a Tellabs 6350, whose STM-64/OC-192 tributaries feed pairs of HPOEs; each HPOE transmits at the FEC rate on its wavelength λ1 … λn into the WTE and TLA towards the undersea equipment, with an HLLB path to/from the LME if required. FEC = forward error correction; HLLB = high-loss loopback; HPOE = high performance optical equipment; LME = line monitoring equipment; LTE = line terminating equipment; SDH = synchronous digital hierarchy; TLA = terminal line amplifier; WTE = wavelength termination equipment.)

Figure 17 The Line Terminating Equipment (LTE) used in the Svalbard cable system


In addition to the Forward Error Correction, the HPOE provides phase modulation, automatic transmit and receive direction channel optimisation, FEC performance parameters and a unit alarm interface. The output power is 9 dBm regardless of the input power.

Forward Error Correction (FEC)

FEC is a technique used to detect and correct digital transmission errors. The forward error correction code improves the signal to noise ratio (S/N) by 4 to 4.7 dB. The HPOE adds overhead bytes used to detect and correct errors that may occur in the undersea system. In other words, redundancy is added to the transmitted signal.

The performance of the FEC is shown in Table 2.

Wavelength Termination Equipment (WTE)

The Wavelength Termination Equipment (WTE) is used to provide optical wavelength processing. The WTE performs automatic wavelength pre-emphasis, wavelength multiplexing and demultiplexing, side-tone LME wavelength combining (the Line Monitoring Equipment is explained later), and wavelength dispersion compensation. Figure 20 shows the WTE's location, while Figure 21 shows a more detailed view.

The Optical Pre-Emphasis Equipment (OPEE) provides channel equalization by adjusting the optical power of each channel prior to transmission, such that all channels have the same end-to-end line error performance. The OPEE maintenance circuit automatically determines the appropriate channel pre-emphasis based on detected FEC errors.

Because the response of the sub sea system is known, the output signal can be adjusted for it before it reaches the undersea system. This is called pre-emphasis. Figure 22 shows the principle.
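A minimal sketch of the pre-emphasis feedback idea, with hypothetical channel margins and a simplistic 1 dB-per-dB line response (not Tyco's actual OPEE algorithm):

```python
# Each channel's launch power is nudged by the difference between
# its error-derived quality margin and the mean margin, so all
# channels converge to the same end-to-end performance.
channel_margin_db = {1: 3.0, 2: 1.5}  # hypothetical per-wavelength margins
launch_power_dbm = {1: 9.0, 2: 9.0}   # equal launch powers to start

for _ in range(10):                   # iterate as the OPEE would
    mean = sum(channel_margin_db.values()) / len(channel_margin_db)
    for ch in launch_power_dbm:
        delta = 0.5 * (mean - channel_margin_db[ch])  # boost weak channels
        launch_power_dbm[ch] += delta
        channel_margin_db[ch] += delta                # assumed 1 dB/dB response

print(launch_power_dbm)  # the weaker channel ends up launched harder
```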

The optical filters, splitters and combiners are passive components, as shown in Figure 23.

Terminal Line Amplifier (TLA)

The Terminal Line Amplifier (TLA) provides high-availability, wideband optical gain. The TLA may be used in constant output power mode or constant gain mode. It can be used in both transmit and receive directions, for post- and pre-amplification respectively. The TLA contains two laser pumps: one is a 980 nm pump with 150 mW constant output, the other a 1480 nm pump with a maximum output of 160 mW. The optical signal amplification is done using erbium-doped fibre amplifier (EDFA) technology. This is a well-developed method for amplifying optical signals without optical-to-electrical conversion and regeneration of the signal. The TLA has two

(Figure 18 labels: the transmit HPOE takes the signal from the SDH/SONET/IP terminal equipment through a receiver, FEC encoder, laser line transmitter and optical amplifier to the wet plant via WTE/TLA; the receive HPOE takes the signal from the wet plant through a line receiver, FEC decoder and transmitter back to the terminal equipment; a maintenance unit handles user channels, office alarms, CIT and OS interfaces.)

Figure 18 The High Performance Optical Equipment (HPOE)


Figure 19 By using FEC you get increased system capacity, longer transmission distances and increased repeater spacing. The optical power can be lower, the pump power gets lower, and there are fewer non-linear impairments

Line bit error rate    Delivered bit error rate
10^-4                  ≤ 10^-9
10^-5                  ≤ 10^-13
10^-6                  ≤ 10^-17

Error correction protocol: Reed-Solomon 14/15, per ITU-T G.975

Table 2 Specifications and performance of the Forward Error Correction (FEC) scheme


stages. The first stage holds the 980 nm laser pump and a gain-flattening filter, and the second stage holds the 1480 nm laser pump. Passive components are used in the TLA because they are usually more reliable than active ones. The TLA also has a maintenance circuit pack, which monitors the components,

(Figure 20 labels: high-speed interfaces feed the HPOEs; the wavelength termination equipment (WTE) combines wavelengths λ1, λ2, …, λn onto one fibre pair, and terminal line amplifiers (TLA) couple them to/from the submerged plant.)

Figure 20 The Wavelength Terminating Equipment (WTE)

(Figure 21 labels: optical pre-emphasis equipment (OPEE) for wavelengths 1–8, one optical filter per wavelength; optical combiner(s); an optional sidetone LME coupler adds the sidetone carrier signal; inputs from the HPOEs or SDH/SONET terminal, and from a second OPEE shelf for wavelengths 9–16 as required; output to the submerged plant, directly or via the TLA; OS, CIT and office-alarm interfaces.)

Figure 21 Detailed diagram of the WTE

records the output power, bias current and semiconductor chip temperature. Any value outside the limits of normal operation will trigger an alarm.

Multi Side-Tone Line Monitoring Equipment (MST LME)

The LME is used to monitor the condition of the undersea system. The LME consists of an equipment rack, a computer and a workstation. The LME operates together with the loop-back coupler modules in the repeaters and in the LTE (Line Terminating Equipment). The whole system is called the Line Monitoring System (LMS). At start-up of the subsea system a delay calibration has to be performed. The Multi Side-Tone (MST) LME injects an optical 2.5 MHz, three-level (-1, 0 and 1) pseudo-random square wave signal directly into the outgoing fibre through the LTE into the subsea system. The injected signal is called the LMS signal. The injected signal comprises two wavelengths which are different from the ones used for the transmission itself, hence the name MST LME. The LTE and all the repeaters contain a 'high loss loop-back' coupler module, which sends a fraction of the LME signal back in the receive direction and back to the LME. The location and loop gain of each repeater and the LTE are measured using the time the signal takes to reach a particular repeater. The results are stored in a database as a reference for future monitoring runs. In this way we can look for changes that may indicate a problem with the undersea system. If the system is altered in any way, a new baseline has to be established. Periodic monitoring runs are performed. If a fault has occurred in the undersea system, the monitoring run will differ from the baseline. The test-run results can be analysed manually by comparing the signature from the test-run with a library of fault scenarios. This gives us the opportunity to determine what the fault may be and where it may be located. The LME also has an automatic signature analysis function. The faults that may be determined are laser-pump failures, fibre breaks, cable breaks or changes in gain and loss. One of the wavelengths from the MST LME is placed near or below the lowest traffic wavelength and one

Figure 22 Principle of pre-emphasis. Two panels, 'No Pre-Emphasis' and 'With Pre-Emphasis', each show power in, gain and SNR out as functions of wavelength

Figure 23 Optical filters, splitters and combiners. The transmit path (multiplexing) takes λ1, λ2, λ3, ... from the HPOE transmitters through TLAs and wavelength combiners, adds the LME signal via the LME coupler, and feeds the undersea transmit fibre. The receive path (demultiplexing) splits the signal from the undersea receive fibre into 'odd' and 'even' channels and delivers λ1, λ2, λ3, ... to the HPOE receivers


near or above the highest traffic wavelength. In our system only one wavelength is used for traffic, but in other subsea systems employing more wavelengths this method is used.
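The delay calibration described above can be sketched in code. The following is an illustrative model only — the three-level probe, the sample counts, the loop gain and the noise level are all invented for the example, not taken from the actual LMS — showing how cross-correlating the looped-back signal with the transmitted probe recovers a loop-back path's round-trip delay and loop gain:

```python
import numpy as np

rng = np.random.default_rng(7)

# Three-level (-1, 0, +1) pseudo-random probe, as used by the MST LME
probe = rng.choice([-1, 0, 1], size=4000).astype(float)

true_delay = 137   # round-trip delay to one repeater, in samples (invented)
loop_gain = 1e-3   # 'high loss' loop-back attenuation (invented)

# The looped-back echo is a delayed, strongly attenuated copy of the probe
echo = np.zeros_like(probe)
echo[true_delay:] = loop_gain * probe[:-true_delay]
received = echo + rng.normal(0, 1e-4, size=probe.size)  # measurement noise

# Cross-correlate the received signal with the probe; the peak lag gives
# the repeater's round-trip delay, the peak height its loop gain
corr = np.correlate(received, probe, mode="full")
lags = np.arange(-probe.size + 1, probe.size)
peak = np.argmax(corr)
est_delay = lags[peak]
est_gain = corr[peak] / np.dot(probe, probe)

print("estimated delay:", est_delay, "samples; estimated gain:", est_gain)
```

The same idea, repeated for each repeater's loop-back path, yields the set of delays and loop gains that the LME stores as the baseline signature.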

Repeater High Loss Loop Back (HLLB) paths

Each repeater has a High Loss Loop Back (HLLB) path used for the LMS signal, as shown in Figure 24.

Using optical couplers, the signal is looped back after the amplifier in the repeater. If one of the optical pumps of an amplifier fails, the loop gain will change and the LMS signal will have a different baseline shape, telling us that we have a problem with a repeater. The laser pumps are very reliable, and one failed pump does not require the repeater to be picked up for repair. The repeaters after the one with a fault will restore the signal strength to the normal value. The MST LME may be used to monitor the undersea system both in service and out of service. For out-of-service monitoring a more powerful LMS signal may be used.
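The baseline comparison that flags such a pump failure can be sketched as follows; the per-repeater loop-gain values and the 1 dB threshold below are invented for illustration, not system data:

```python
import numpy as np

# Each monitoring run yields one loop-gain value (dB) per repeater;
# the run is compared against the stored baseline and deviations
# beyond a threshold are flagged as possible faults
baseline = np.array([12.0, 11.8, 12.1, 12.0, 11.9])  # dB, reference run
run      = np.array([12.0, 11.8,  9.0, 12.1, 11.9])  # dB, periodic run
THRESHOLD_DB = 1.0

deviation = run - baseline
faulty = np.flatnonzero(np.abs(deviation) > THRESHOLD_DB)
for idx in faulty:
    print(f"repeater {idx + 1}: loop gain changed by {deviation[idx]:+.1f} dB")
```

An automatic signature analysis would go one step further and match the deviation pattern against a library of fault signatures (pump failure, fibre break, gain change) to classify the fault as well as locate it.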

The management system

The management system used for the Tyco equipment is called TEMS (Tyco Element Manager System). The TEMS provides network management functions for the line monitoring equipment (LME), the line terminating equipment (LTE) and the power feed equipment (PFE). The 24-hour surveillance is done from Telenor's Operation Centre at Fornebu, while monitoring runs and other management functions are done from Longyearbyen and Harstad, where the two other TEMS workstations are located. Not all functions can be performed from the TEMS; some have to be done using a laptop computer directly connected to the equipment in the cable station.

Summary

A fibre optic telecommunications cable was deployed between the Norwegian mainland and Svalbard in 2003 and put into operation in January 2004. This article has provided an overview of the technology used, together with explanations of the different building blocks of the complete transmission system. The cable and additional equipment have an expected lifespan of 25 years, and the current capacity utilisation is 10 Gb/s.

Eirik Gjesteland (29) graduated from Kongsberg Technical College in 1998 and has a BA with honours in Electrical and Electronic Engineering from Heriot-Watt University, Edinburgh, Scotland from 2000. He has since then been working at the Operational Centre of Telenor Networks.

email: [email protected]

Figure 24 The repeater High Loss Loop Back (HLLB) path, showing for each direction the optical coupler, line buildout loss (LBO), fibre Bragg gratings (FBG1, FBG2) and amplifier A


Mobile communications have expanded their grounds from covering pure communication needs to becoming a lifestyle product, and Telenor has become one of the business's major players, even on a global scale. In Norway the mobile phone is a natural first choice for many people. But it was not written in the stars that the future of wireless communications would become that bright. I am therefore pleased that the Editorial Board invited Telenor to highlight its contributions to the development of the GSM standard and reflect upon why GSM became such a success.

Telenor (formerly Televerket) has a long record of operating public mobile telephone systems. A manual system in the 160 MHz band (OLT) opened in 1967. The service became quite popular, and the system soon ran out of capacity. A few years later a supplementary system was introduced in the 450 MHz band in some heavy-traffic areas in the southern part of Norway. The introduction was coordinated with neighbouring countries, which experienced similar capacity strains.

The request for more capacity was uniform across the Nordic countries. Hence the idea to develop and establish a common automatic mobile telephony system was born. The system was called NMT. An important requirement was that the system should offer international roaming throughout the Nordic countries.

The appreciation of OLT and NMT in the Norwegian market revealed some basic knowledge that became crucial for Televerket in the years to come.

• The market potential of mobile communications seemed substantial, but unpredictable.

• Coverage is crucial, both in terms of area coverage and in terms of services offered, i.e. seamless handover, international roaming and voice mail services.

• The steady miniaturisation of electronic components and the everlasting increase in processor speed imposed some inherent market drivers that are particularly valuable in mobile communications, for success so much relies upon the ability to create mobile applications.

• The operator's best salesmen are the mobile users themselves, and the importance of forefront users eager to exploit new technology should not be underestimated. Equally important is the need to identify user groups inclined to generate traffic volume.

Two years ahead of opening the analogue NMT system to the public in 1981, Televerket launched some initiatives to study what the impact of digital treatment of speech, modulation and coding would be on mobile communications. Those activities were tightly connected to an advanced expert community at the Norwegian Institute of Technology (NTH).

At the same time Televerket once more joined forces with its Nordic sister operators to study what might be the best strategy for the Nordic countries for mobile communications after the NMT era (FMK, "Fremtidens Mobilkommunikasjon"). In parallel, Televerket took some initiatives within ITU CCIR and CCITT to engage those bodies in the specification of global standards of vital importance for the smooth operation of automatic international mobile communications systems. I myself was appointed Special Rapporteur on mobile questions in CCITT Study Group XI (ISDN and telephony switching and signalling), a position I held from 1980 to 1992.

The engagement of Televerket in the specification of GSM

A SHORT INTRODUCTION BY BJØRN LØKEN

Bjørn Løken is Director in Telenor Nordic Mobile


Figure 1 An extract of the organizational structure of the GSM project around 1988. GSM Main Body was the group to approve or reject the work items or specifications proposed by the four working parties. Below the working parties, there would be ad hoc groups with special assignments. Three of those groups are depicted in the figure: SCEG (Speech Coding Experts Group), L3EG (Layer 3 Expert Group) and DGMH (Draft Group on Message Handling). The colour codes indicate the groups in which Televerket a) was represented, b) had editor responsibilities, or c) had the chairmanship



When the GSM initiative was launched, Televerket found itself in the happy situation of possessing experts in many different fields that appeared to be crucial in the GSM work. These resources were offered to the GSM group. In my opinion Televerket's efforts made a significant contribution to the success of GSM. Indeed, without operational experience and convincing expertise we would have had a much harder time advocating a system equally well suited for both urban and rural areas like those we find in the Nordic countries.

The organisation of GSM is depicted in Figure 1.

From Televerket, Jan Audestad, Petter Bliksrud and myself participated in the main body. When the working parties were established, Jan became chairman of WP3. Several experts joined WP2 and took part in the endeavours therein, in particular Torleiv Maseng and Rune Rækken. Helene Sandberg and Hans Myhre from the operational division of Televerket's mobile activities joined WP1. Jon Emil Natvig, at that time a distinguished expert on speech coding techniques, became chairman of SCEG. Knut Erik Walter entered WP3 and became heavily involved in its work. Finn Trosby entered IDEG (later WP4) and became chairman of DGMH, one of its four drafting groups. Stein Hansen entered the Permanent Nucleus (PN) in Paris and became editor of several of the specifications that became assignments of the PN.

Many of the individuals mentioned above are among the authors of the articles to follow. In addition, we have been so lucky as to get the chairman of the GSM project itself, Thomas Haug from the Swedish Televerket (now TeliaSonera), to write about how GSM came about.

It is for the historians to judge what the future would have held for mobile communications in Europe without the heavy involvement of Televerket and the other telecom operators in the GSM specification work and their firm commitment to launch services based upon the emerging standards (MoU). Having reached such an overwhelming penetration worldwide, it is in my opinion obvious that the efforts have paid off. Telenor – being the 12th biggest mobile operator in the world and basing its business entirely on GSM – has indeed benefited from it.

Bjørn Løken (57) graduated from the University of Oslo in 1972. After graduation he joined the Norwegian Computing Center as a researcher on teletraffic simulation models. In 1975 he was employed by the Research Department of Televerket, and in the following five years he worked on specification and testing of the NMT system. In the 1980s he was engaged in the CEPT/ETSI specification work on GSM. From 1980 to 1992 he also held a position as Special Rapporteur on mobile questions in CCITT Study Group XI. From 1987 to 1992 Bjørn Løken was Chief of Research with responsibility for network development for fixed, mobile and data communications. From 1992 to 1995 he was Director of Strategy in Televerket. Since 1995 Bjørn Løken has been with Telenor Nordic Mobile, primarily concerned with questions related to telecom regulations.

email: [email protected]


The basis for the work

The European telecommunications market was for a long time badly fragmented and in many areas lacking universally applied standards, despite efforts by i.a. the ITU. As a result, economies of scale could frequently not be achieved. In an attempt to remedy that situation, the organisation of 26 Western European Telecom Administrations (CEPT) started standardisation activities in 1959, but progress was often hampered by differences in policy between the member countries. As time went by, the fragmentation in the rapidly growing mobile communications field became particularly annoying, because the very idea of mobile communication is of course to communicate while on the move, and the incompatibility made this impossible while visiting a foreign country. (One exception was the NMT system.) The spectrum available for new mobile communication systems was very limited in frequency ranges suitable for this purpose, among other things because of the large amount of spectrum reserved for radio and television broadcasting.

At the meeting of the World Administrative Radio Conference (WARC) in June 1979, it was realized that it was desirable to allocate a part of the 900 MHz band to land mobile communication in Zone 1 (which in the terminology of the Radio Regulations means Europe). The reason for this decision was that the 900 MHz band, which up to that time had not been extensively used, was now becoming attractive because of the technical development. A large block of spectrum was therefore reserved for land mobile use, but nothing was said about the character of the systems – whether they should be private or public, automatic or manual, analogue or digital.

CEPT discussed for a while the opportunity thus created. At the CEPT Telecommission meeting in June 1982, the Netherlands and the Nordic Telecom Administrations presented a proposal stating that, because of the rapidly increasing demand for mobile communication, the frequency band allocated by WARC'79 was in danger of being put to use for national and probably incompatible systems. This would almost certainly remove the only opportunity for the coming decades to achieve a form of harmonisation in Europe in the field of public land mobile services, since frequencies above 1 GHz were not at that time considered suitable.

The Telecommunication Commission decided to entrust its Harmonisation Committee (CCH) with the task of coordinating the activities of CEPT in the field of mobile services. A special study group was to be set up, reporting to CCH and working out an action plan with the aim of ensuring that compatible mobile systems could be implemented by the early 1990s. The new study group was given the name Groupe Spécial Mobile (GSM). The work was to be completed by 1986, and the Commission accepted an offer from the Swedish Administration to supply the author of this article as Chairman. (I was in a meeting somewhere else and was quite surprised when I heard this, since nobody had asked me in advance.)

The decision made by the Commission was quite vague and in effect left it to the new group to propose its own terms of reference. Representatives of the Netherlands and the Nordic Administrations therefore met during the summer of 1982 in order to write a proposal for an action plan, which was subsequently approved by CCH in November that year and used (Doc. GSM 2/82) as the basis for the work of GSM for many years.

The decision only mentioned "harmonisation", which indicates that the compatibility aspect was the dominating factor behind the decision, and few delegates at the T-Commission meeting sincerely believed that free circulation of radio users across international borders could be achieved within the foreseeable future, given the serious political and military obstacles that existed at the time. In fact, later comments have indicated that in view of the many failures in European standardisation, many doubted that a common system could be agreed at all.

How it all began

THOMAS HAUG

Thomas Haug, formerly of the Swedish Telecommunications Administration, now retired


This article describes the background for the European Telecom Administrations' decision to start the work on the GSM mobile communication system and the fundamental issues the committee was faced with in the first five years, as well as how they were dealt with.


The start

From the start, it was clear that many more aspects than harmonisation had to be addressed if the goal of a modern Pan-European service was to be reached. Clearly, the goal envisaged by the CEPT Telecommission and the CCH of completing the work by the end of 1986 could not be met if important new developments were to be taken into consideration. It was therefore decided that by the end of 1986 only an outline specification should be available, comprising the main system parameters for the various parts of the system and their interfaces, including the air interface.

The list of basic requirements for the new system was to a large extent patterned on the corresponding list for NMT, but with a number of modifications, both because there would be a huge number of technological advances to be exploited and, of course, because of the necessity for the new system to satisfy the needs of a far larger community than NMT, all over Western Europe. The need for services other than speech was stressed, but since nobody knew exactly what those services would be, the system structure had to be modular and flexible. The same philosophy as for ISDN and OSI should be applied in order to achieve this, and standards for protocols etc. should as far as practicable be compatible with such developments. Furthermore, the system had to provide the same facilities as those offered in the public telephone and data networks, and the need for security had to be taken into account – all of this without significant modifications of the existing fixed networks.

Hardly anyone among us expected that an analogue system would be preferable, since the technology was clearly moving toward digitalisation. It had to be proved, however, that a digital system would meet the needs better than an analogue one. Digital technology had never before been tried in terrestrial public mobile communications, and it was necessary to convince the operators and manufacturers that a digital system was a realistic goal. Obviously, if "harmonisation" had been the only goal, an analogue system could have been based on well-established techniques and could therefore have been specified and built in a short time. This may have been in the minds of some of the CEPT T-com delegates, whose main concern was to make sure that the allocated spectrum would be used for a harmonised system, without paying much attention to the technical solution. The choice of analogue techniques would mean, however, that many desirable objectives could not be met, such as ease of introducing modern services, compatibility with the emerging digital fixed networks, etc., and it was clear that the digital technique had a far greater potential than the analogue one for mass manufacturing processes. Establishing the feasibility of the digital technique in the mobile network was therefore among the first steps of the work towards a truly modern system. The greatest problem would be caused by the need to achieve a satisfactory speech quality and at the same time conserve spectrum.

The first meeting of GSM was held in Stockholm in December 1982. One of the most important issues at that meeting was the spectrum situation in the CEPT countries, and there was great concern that it might prove impossible to keep the 900 MHz band "on ice" for GSM for as long as envisaged in the work plan for GSM. The feasibility of a standardised system was thus in great danger, since it was unrealistic to expect the manufacturers to invest in a system with a very uncertain market potential. This point was a constant concern for several years, until the EU Directive reserved the band for GSM.

Another important issue was the question of free circulation of users. The existing regulations in many European countries regarding the use of radio equipment by foreign visitors would clearly be a serious obstacle to the system, since free circulation would be one of the great advantages of a common system. To eliminate such obstacles would therefore be among the most important tasks of the committee, but it would be a very difficult one, since those obstacles were usually outside the authority of the national telecom administrations.

Some assumptions had to be made for testing the solution, regardless of the choice of analogue or digital techniques, and one such assumption had to be a traffic model. This was a very important issue during the first year of the group. In retrospect, of course, those traffic figures look almost ridiculous today.

Mode of operation

The written and unwritten rules of CEPT were in many ways very different from those generally used in purely national bodies. As usual in CEPT, we had to work on the basis of consensus, so there would be no voting. Earlier experience in CEPT and other standards bodies had shown that making recommendations by majority decision could easily lead to a situation where some countries chose to disregard the recommendations, and compatibility was then lost. (It is correct to say that the possibility of voting in a few special cases was introduced towards the end of the 1980s, but it was never used by CEPT/GSM.) Consensus is not the quickest way to reach a decision, but on the other hand, a consensus makes it almost certain that everybody is going to stick to the decision made.


The choice of working language was of great importance for the efficiency of the work. In CEPT, English, French and German were all established as working languages, and the committees and many large working groups therefore worked through translators. In GSM, we agreed that it would be far better to work in English only, since just about everybody in the technical field had a sufficient command of the English language, and since working through translators in such a highly specialised field would actually hamper the work. Furthermore, making arrangements for translators and interpreters for every meeting would involve practical problems and major costs. The language question turned out to be a politically sensitive issue in certain countries, however, and the respective delegations had to do some arguing at home in order to convince their managements. They succeeded, and English was accepted, somewhat reluctantly, as the only working language. The issue resurfaced after a few years, but the rules were by then firmly established and nothing happened. Without this decision, we would never have been able to reach consensus within the time frame agreed.

The GSM committee had to utilise other bodies in and outside CEPT as much as possible. Within CEPT, there were many working groups with experts in the fields of radio issues, signalling, data transmission, commercial issues, etc. In addition, the GSM committee liaised with several bodies outside CEPT, such as various COST groups and industrial bodies like EUCATEL. The expertise in those groups was very useful for the work on the new system. Another body with which we had frequent contacts was the European Commission, which on the political level proved to be very helpful. Above all, through its activities, the GSM frequency bands (890–915 and 935–960 MHz) were reserved for the coming Pan-European system.

It soon became clear that it would not be possible to deal with all issues in the Plenary, so GSM decided to set up three ad hoc sub-working parties, reporting to the Plenary, to deal with the three major areas of services and facilities, radio aspects and network aspects. As is usually the case in most organisations, the increasing workload made it necessary to increase the number of working parties and to make them more independent, and in 1986 we had to set up a Permanent Nucleus in Paris in order to support the work, both in the Plenary and in the working parties.

The question of IPRs, often a difficult issue in standardisation bodies, arose early in the work. In line with the usual CEPT strategy, it was decided to avoid patented inventions unless they could be made available royalty-free for the CEPT standards, and we succeeded in acquiring solutions along those lines for two areas, i.e. speech encoding and the voice activity detector. After 1988, the question of IPRs was taken over by the MoU, and it assumed a much greater importance.

Crucial and controversial issues

Two crucial issues during the first years of the GSM work, i.e. analogue versus digital techniques and the access and modulation method in a digital system, required a lot of work. Proposals were invited, and no less than eight were submitted for testing, which was done in Paris in late 1986 by CNET, assisted by the GSM Permanent Nucleus. The results were discussed in a meeting in Funchal, Madeira in February 1987. The group found unanimously that the digital technique offered great advantages over the analogue one, and the measurements showed that the requirement for speech quality could be met with a good spectrum economy. Controversy arose around the second question, however, since France and Germany had proposals which did not agree with those of the other CEPT countries. This was due to political decisions made in France and Germany, not to any disagreement on the part of the delegates present, who were privately in agreement with the rest of CEPT/GSM, but they had to follow the line taken at the highest political level at home. Despite endless discussions, the issue could not be resolved in the meeting, so on this fundamental question there was a serious risk of failure of the entire GSM effort. During the following months, however, a considerable amount of pressure was put on the governments of the two countries, above all by the European Commission, which was very anxious to get a Pan-European system. As a result, at the next meeting in June 1987, France and Germany informed us that they had agreed to the solution accepted by the others. Thus ended the only really serious controversy that we had in CEPT/GSM, and we were now certain that there would be just one European system. Without agreement on that point, the Pan-European system of today would not exist.

Conclusion of the first phase

The 1987 decision was a milestone in the work on the system, since it eliminated the only major controversial issue, and the road towards the final goal now lay open. There were many changes to our work arrangements, such as the joining of ETSI in 1989, but basically the established organisation with working parties, Permanent Nucleus, etc. stayed, although the names were changed. As examples of those changes one should note the signing of the Memorandum of Understanding in September, the opening of CEPT groups to the manufacturing industry and later the


creation of ETSI. Without the commitment of those bodies, hardly any manufacturer would have dared to invest the huge sums required for the development of the system. Still, five years of detailed work remained to be done. There is no doubt that in terms of man-years those years represent a far greater part of the work than Phase 1; it was basically a detailed design project, which cannot be said of Phase 1. The support from the operators, national regulatory authorities and manufacturers grew, and a huge number of people were gradually enlisted. However, that part of the work is well known, and it would lead too far to go into it here.

The interested reader will find it in [1].

Reference

1 Hillebrand, F et al. GSM and UMTS: The Creation of Global Mobile Communications. London, John Wiley, 2002.

Thomas Haug (77) was employed by the Swedish Telecommunications Administration (now Telia) from 1966 until his retirement in 1992. He was Chairman of Groupe Spécial Mobile in CEPT, later ETSI TC/GSM, from its inception in 1982 until the start of the system in 1992, and in that capacity he led the work on the specifications for the GSM system. From 1970 to 1978 he was Secretary of the joint Nordic NMT committee which worked out the specifications for the NMT system, and from 1978 until 1982 he was Chairman of the NMT committee.

email: [email protected]


FT: In which period did you take part in the GSM work on services, and what was your main area of work?

HS: I entered Working Party 1 of the GSM activities in CEPT, and later ETSI, in 1986 and stayed with the work items of WP1 – later GSM1 – until 1990. One of the major objectives in WP1 at the time of my entrance was a 'modification' of the current drafts for supplementary services, and I was fortunate to become heavily involved in the work with those.

At approximately the same time, I also took part in activities in the MoU, mostly in the subgroup for billing and accounting (BARG).

FT: Why was a 'modification' necessary?

HS: The work on specifying the supplementary services had commenced some time before I entered WP1, just before the specifications of supplementary services for ISDN were stabilized. Unfortunately, WP1 had gone ahead quite rapidly with the Stage 1 definition of the supplementary services for GSM without paying attention to the fact that there would soon be a complete and approved set of these for ISDN. When that became the case, a decision was made to review the whole portfolio of supplementary services for GSM to align them with those of ISDN as much as possible. In fact, 'redoing' might be a better word than 'modifying', because almost all of the old material was replaced by new ISDN 'alignments'.

FT: How easy was it to find replicas of each of the supplementary services of ISDN for GSM?

HS: For every supplementary service that did not have mobility or roaming implications, the task was fairly easy. For those that did, the task was harder. In some of those cases – e.g. the Advice of Charge – the service got a somewhat different interpretation in GSM than it had in ISDN. And in some other cases, a supplementary service of ISDN simply did not apply in GSM. However, I remember that for some of those – e.g. Call Transfer – it took quite some time and effort to remove them from the list of supplementary services!

FT: Retrospectively, how would you assess the success of the service portfolio of GSM?

HS: This is a question that is difficult to answer, but let me try to comment on the GSM services from different angles.

In my mind, the success of GSM was due to two basic factors:

• The scheme of agreements and functionality to provide for a smooth way of international roaming;

• SMS, turning out to be an extremely popular service to supplement telephony.

Defining the fundamentals of the GSM system coincided with one of the last hopes of substantial innovations within the domain of circuit switching: ISDN. I do not think that ISDN deserves the designation of a regional or global ‘success’ – far from it. However, one has to admit that the only sensible and viable strategy of the GSM design in the last half of the 1980s was to align the service repertoire of GSM with what was anticipated to be the future bandwagon of fixed network communications.

In terms of supplementary services, I think that the forwarding services, the barring services and the Calling Line Identification Presentation (CLIP) have also been vital elements of the GSM success.

FT: Taking into account what has happened afterwards, which is the one additional task or service you would have appreciated that the WP1 crew of those days had incorporated?

HS: What I’m immediately thinking of is the task to prepare prepaid solutions for roaming. Even if that perhaps is a MoU/GSMA matter more than a GSM/WP1 matter, and even if prepaid was not at all a mature concept in those days, the prepaid business domain has proven to be so extremely important that this task is the one that springs to mind. Having been able to hasten the launch of those services on a broad scale would have meant a lot to the economy of the operators.

My work in the GSM services area

An interview with Helene Sandberg by Finn Trosby

Helene Sandberg is Vice President in Telenor Nordic Mobile

Telektronikk 3.2004 ISSN 0085-7130 © Telenor ASA 2004


FT: How would you assess the way of working in WP1; how were the discussions and the ability to resolve its objectives? How was the collaboration with other bodies inside and outside GSM? Who were the most profiled members of WP1 in those days?

HS: I think that the working procedures of WP1 were fairly good and efficient. I recall discussions in some of the bodies outside the body hierarchy underneath GSM, e.g. the MoU, in which the ratio between the urge to keep on discussing and the importance of the subject was quite stunning. For instance, there was a discussion on the shape of the arrow on the SIM cards that raised very strong opinions. In WP1, however, the economy of the attendants’ man-hours was far better.

Another risk of a working group structure like the one of GSM is that the technical bodies will be left idle waiting for Stage 1 descriptions and will start producing those themselves. I did not recognize any such thing during my time in WP1. I would say that WP1 had a fairly good grip on things and that the collaboration with other groups went on fairly well. It should be mentioned, however, that WP1 deliberately gave the technical groups quite some latitude in terms of implementation. For instance, I do not think that there was very much contact between WP1 and the speech coding people in SCEG.

In a group of this nature, the chairman will normally be the most, or one of the most, profiled persons. In WP1 this was also the case with our chairman, Madame Martine Alvehrne from France Telecom. Among the other participants who caught your eye and ear, I would like to mention Alan Cox from Vodafone (at that time Racal/Vodafone). I think he was working full time on catching up with anything within GSM that contained service aspects, and he combined his competence in this field with the clear business strategy that has characterized Vodafone ever since.

FT: Finally, would you like to add some more to the overall characteristics of WP1?

HS: For people who may have an interest in comparing the standardization of those days with that of today, I think WP1 exemplified a quality of the whole GSM effort: it was all very much the mobile operators’ ballgame. In the WP1 of the years of my attendance, only operators were allowed to participate. In the technical groups, vendor representatives gradually seeped in. At first they were only allowed to speak; after a while they were given the same rights as the representatives from the operator industry. But the foundation walls of GSM, i.e. the basic specifications with version number 3.0.0 and slightly above, were produced under the strong influence of the mobile operators. Without speculating any further, I would say that is worth some consideration.

Helene Sandberg has been employed at Telenor (formerly ‘Televerket’) since 1966 and has more than 20 years’ experience from mobile operations both domestically and abroad. She participated in the launch of Esat Digifone in Ireland and Connect Austria (One) in Austria. She returned 18 months ago from Thailand where she had been COO for two years. She is now Vice President in Telenor Nordic Mobile with responsibility for Service & Delivery.


Why not wideband?

At that time the common thinking was to transmit symbols so slowly that radio propagation reflections from mountains and buildings did not matter. This meant that the rate should not exceed 25 kbit/s. If this rate was exceeded, the symbols from various paths would be sufficiently shifted in time to overlap with the next symbol and make it difficult to decode1). The problem with this idea was that in such a narrowband system, the reflecting paths would sometimes cancel each other and cause the signal to disappear.

Our idea

If the bandwidth of the channel was sufficiently wide, however, the various paths would not overlap and could be distinguished. Only when there was no path would the signal disappear. My idea was to measure the channel continuously using radar principles and calculate how the signal would have been received, given the channel, for various data sequence alternatives. The receiver would continuously choose the sequence which most closely resembled the received signal. This task is called equalization. There is a simple and efficient implementation of this operation called the Viterbi algorithm. The idea was to merge demodulation, equalization and error correcting decoding in a joint Viterbi decoder [1–4]. The smart thing was that the modem would not make instant decisions about received bits, but defer decisions and observe the received signal until typically 4 bits had been observed, and then make a decision on the bit value. As the number of bits which could be considered increased, the modem could handle longer and longer echoes or higher bit rates, but at the same time the complexity of the modem grew exponentially.
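The decision process described here (measure the channel, predict the reception for each candidate bit sequence, keep the closest match) can be sketched in a few lines of Python. The two-tap channel and the brute-force search are purely illustrative; a real receiver uses the Viterbi algorithm to obtain the same answer without enumerating all 2^n candidates, which is why complexity grows with the channel memory rather than the message length.

```python
import itertools

def convolve(bits, taps):
    """Linear convolution: each symbol is smeared over len(taps) outputs."""
    out = [0.0] * (len(bits) + len(taps) - 1)
    for i, b in enumerate(bits):
        for j, t in enumerate(taps):
            out[i + j] += b * t
    return out

def mlse_brute_force(received, taps):
    """Pick the bit sequence whose predicted reception is closest to the
    actual signal (maximum-likelihood sequence estimation). The Viterbi
    algorithm computes the same answer efficiently."""
    n = len(received) - len(taps) + 1
    best, best_err = None, float("inf")
    for bits in itertools.product([-1, 1], repeat=n):
        predicted = convolve(bits, taps)
        err = sum((r - p) ** 2 for r, p in zip(received, predicted))
        if err < best_err:
            best, best_err = bits, err
    return best

# A two-path channel: the echo at half strength overlaps the next symbol.
taps = [1.0, 0.5]
sent = [1, -1, -1, 1, 1]
assert mlse_brute_force(convolve(sent, taps), taps) == tuple(sent)
```

Note how the search considers whole sequences rather than deciding bit by bit, mirroring the deferred decisions described in the text.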

Since we did not know which bit rate was the best choice, Odd Trandem at ELAB designed the adaptive modem shown in Figure 1. This system could transmit at bit rates of 256, 512, 1024 and 2048 kbit/s and could display the instantaneous channel response on an oscilloscope. It could cope with up to 4 bits’ propagation delay difference between all paths. If this was exceeded, that part of the signal would appear as interference. Fortunately, the longer the signal propagates, the weaker it becomes; therefore this interference is normally small.

Wideband or narrowband?

World championships in mobile radio in Paris 1986

Torleiv Maseng

Torleiv Maseng is Director of Research at the Norwegian Defence Research Establishment


The story I will tell started in 1981 when I came back to ELAB/SINTEF in Trondheim after exile in the Netherlands working for SHAPE Technical Center in The Hague. My assignment was to propose a new digital mobile system replacing the successful analogue Nordic Mobile Telephone system. The new digital system which was to be defined was coined “Fremtidig Mobilkommunikasjonssystem” (FMK). Jan Audestad at Televerkets Forskningsinstitutt at Kjeller outside Oslo had formulated the task and given the project to ELAB.

In the beginning, this was a lonely assignment since I worked alone and few others took part in our project. Audestad did his best in linking us with other activities in the Nordic countries. The work towards FMK was arranged through Nordic meetings in various subcommittees.

1) This is the same reason why psalms intended for churches with a lot of echoes are slow.

Figure 1 Odd Trandem: The brain behind the prototype development. Here he is skiing with what eventually ended up becoming a GSM terminal. I am behind him outside the picture with car batteries


Committee work with no result?

Around 1984 we started developing hardware to prove our ideas. At the same time Audestad introduced us to some very enthusiastic French scientists from CNET, a PTT lab outside Paris. At that time they were working with Germany towards a German/French system. The idea was that when Germany and France finally agreed, the rest of Europe would follow. They were keen to start working with us. Their principal idea was “Slow Frequency Hopping” to randomize interference and to provide diversity to avoid deep signal fades. Shortly after, the “Groupe Spécial Mobile” (GSM) organization was born and all the Nordic FMK staff were put into a similar GSM organization. The work was assigned to three Working Groups: 1 Services; 2 Modulation, coding and access system; 3 Protocols. I participated in WG2. Around 18 European monopoly PTT operators from different countries eventually participated under the auspices of CEPT. The work in WG2 was slow, but many ideas from the nations were presented. Even though we were supposed to propose a modulation and coding scheme for radio access, everybody had their own favourite ideas and it was hard to reach an agreement. The slow progress in WG2 delayed the work in the other groups. The European Commission threatened to take over the control of the work in GSM unless progress was made. Something needed to be done.

Figure 2 Eight different prototype modems were offered for the Paris trials. Among these were Philips, Ericsson, Mobira (Nokia) [From Communications System Worldwide, Sept 1986]

Overview of GSM experimental systems (columns separated by “|”, following the column order of the original figure)

System: CD-900 | S-900-D | SFH-900 | Mats-D (MS-BS) | Mats-D (BS-MS) | DMS-90 | MAX | – | ADPM
Developers: ATR, SAT, SEL, AEG, Italtel | ANT, Bosch, Telettra | LCT | Philips/TRT | Philips/TRT | Ericsson | Televerket Sweden | Mobira | Elab, Trondheim
Band type: wideband | narrowband | nb./wb. | narrowband | wideband | narrowband | narrowband | narrowband | –
Multiple access: TD/CDMA | TDMA | CD/TDMA | FDMA | CD/TDMA | TDMA | TDMA | TDMA | TDMA
Traffic rate (kbps): 12.8+3.2 | 9.6+1.4 | 16+25.6 | 16+2 | 16+2 | 16+8 | 16+4.6 | 16+8 | 16
Signalling rate: 0.8+1.6 | 0.25 | 10.4 | 0.75 | 0.75 | 1+0.67 | 1.6+0.5 | 0.5+0.5 | –
Multiplexed rate: 18.4 | 11.25 | 52 | 18.75 | 18.75 | 25.67 | 22.7 | 25 | –
Channels per carrier: 63 | 10 | 3 | 1 | 64 | 10 | 11 | 9 | 10-160
TDMA-factor per carrier: 63 | 10 | 3 | 1 | 4 | 10 | 11 | 9 | 10-160
CDMA-factor per carrier: 1 | 1 | 1 | 1 | 8 | 1 | 1 | 1 | 1
Multi access rate (ksymb/s): 1496.25 | 128 | 201 | 19.5 | 39 | 340 | 302 | 252 | 256-4096
Modulation rate (kbaud): 3990 | 128 | 201 | 19.5 | 1248 | 340 | 302 | 252 | 256-4096
Gross bit rate (kbps): 7980 | 256 | 201 | 19.5 | 19968 | 340 | 302 | 252 | 256-4096
Carrier spacing (kHz): 6000 | 250 | 150 | 25 | 1250 | 300 | 300 | 250 | 200-4000
Modulation: QPSK | 4-CPFSK | GMSK | GTFM | QAM | GMSK | GMSK | GMSK | DPM
Pulse type/filtering: 1SRC | 2AC | 0.30 gss | gss | – | 0.25 gss | 0.50 gss | 0.25 gss | SRC
FM/PM: PM | FM | FM | FM | PM | FM | FM | FM | PM
Symbol alphabet: 4 | 4 | 2 | 2 | 4 | 2 | 2 | 2 | 2
Modulation index: 0.50 | 0.33 | 0.50 | 0.50 | 0.50 | 0.50 | 0.50 | 0.50 | 1.20
Double sided bandwidth, -20 dB (kHz): – | 130 | 200 | 20 | 2000 | 370 | 300 | 270 | –
Double sided bandwidth, -70 dB (kHz): – | 260 | 600 | 50 | 2500 | 810 | 1000 | 600 | –
Gross symbol duration (µs): 0.251 | 15.6 | 4.98 | 51.3 | 0.801 | 2.94 | 3.31 | 3.96 | –
Time slot duration (ms): 0.4762 | 3.1875 | 1.3333 | 16 | 4 | 0.8 | 1 | 1.77 | –
Speech frame duration (ms): 15 | 32 | 12 | 16 | 16 | – | – | – | –
M acc frame duration (ms): 30 | 32 | 240 | 16 | 16 | 8 | 11 | 16 | –
M acc frame length (gr symb): 119700 | 2048 | 48240 | 312 | 19968 | 2720 | 3322 | 4032 | –
Frequency hopping rate (s-1): – | – | 250 | – | – | 125 | – | – | –
Diversity BS: – | yes | – | yes | – | – | yes | yes | –
Diversity MS: – | – | – | – | – | – | yes | yes | –
Speech coding: SBC | RELP | SBC | RELP | RELP | – | – | – | –
Channel coding: RS | – | RS | RS | comp | RS | – | RS | –


The need for a championship to get further

After two years and no real firm decisions between the various proposals, WG2 decided to invite nations to implement prototype modems and to conduct comparative measurements. Initially the Netherlands offered to host a measurement campaign, but very soon CNET in Paris offered superior laboratory facilities for the measurements. We agreed upon how to choose the best modem: the modem which is able to handle the most traffic within a limited frequency band, considering interference from surrounding base stations reusing the frequencies. For this purpose a set of propagation channels was selected. The neat thing about this criterion was that it considered both the modem’s ability to withstand interference and noise, and made it important to use little bandwidth. The most important channel was called “Typical Urban” and was selected after a lot of propagation experiments throughout Europe under the auspices of the EU in the group called COST 207.

Help from Sweden

Even if the Swedish Televerket had their own competing modem candidate, they realized the threat that the competing consortia could end up with something which would not be advantageous to Sweden. They therefore decided to help us in Norway with field measurements. They provided a chauffeur and a car with measurement facilities in Stockholm. The measurements were a great success since they confirmed my calculation that there must be an optimum bit rate for a certain terrain [1]. It turned out that 512 kbit/s outperformed 256 and 1024 kbit/s. For low bit rates, the likelihood of losing the signal was greater. If the bit rate was too high, a practical modem would not be able to handle the complexity. In Figure 3 this would correspond to a too small window. In between there was an optimum!

The Paris field trials, preparing for the championships

The Paris field trials were subject to a lot of attention. In the journal Communications System Worldwide from September 1986, Figure 2 was presented along with a description of the various proposals. About the Norwegian proposal it was written:

“Finally in this brief overview, some mention should be made of the system proposed by Elab of Trondheim, Norway. This was apparently designed at the behest of the Norwegian PTT, which wanted to be seen to be making some contribution to GSM.

Since Elab is attached to a Technical Institute and has no manufacturing capability, it would appear to have little chance of success. Nevertheless, Hans Werner Lawrenz of the Deutsche Bundespost describes it as a ‘very interesting approach’.

It uses a system called ‘adaptive digital phase modulation’, the basic idea of which is to cater for the difference between operating in large cities and in rural areas, and thereby increase the capacity of the system. Thus in cities, a lot of channels per carrier with a short coverage range and small cells are used. ‘It’s a good combination of coverage and bandwidth,’ comments Lawrenz.”

During the Nordic Seminar on Digital Communication, 14–16 October 1986 in Stockholm, William Gossling of The Plessey Company Plc gave a keynote talk:

“What we are faced with now is a technology breakpoint in mobile radio. The true significance of second generation digital cellular is that it gives Europe not only a technically superior solution to the mobile telephony problem, but also the opportunity to create a world competitive supporting industrial structure.

Figure 3 The upper curve is the window inside which the channel response (lower curve) must fit, since the part outside will disturb. The modem is constantly trying to minimize the energy outside. The picture was taken at Skeppsbron in Stockholm in 1986. The window is around 4 bits at a rate of 512 kbit/s. When the bit rate is increased, the window becomes shorter and there will be more disturbance from multipath propagation. When the bit rate is reduced, the window will be longer, but the components in the channel response less resolvable, which may cause the signal to disappear (fading)

It is time for our men of affairs to make the most positive outcome for Europe to be achieved. Let us be in no doubt that we already have a distinguished record of success. European achievements with Airbus Industrie, EUTELSAT, Ariane, Jaguar and Tornado are beyond debate. Now the time has come again.

Results from the championships in Paris

By Christmas of 1986 one of Europe’s largest-ever multi-company research and development programmes successfully passed a critical phase of evaluation in Paris. This was accomplished with the usual drama of cancelled leave, overtime working and ruined fingernails that is common to such endeavour. I refer, of course, to the Paris field trials, designed to evaluate the various proposals for a European digital cellular radio standard.”

In February 1987 the New Scientist (Figure 4) wrote:

“Cars equipped with the various technologies on offer were driven around Paris. The winner was a surprise – it was a system called ELAB, that was developed not by a large company but by Trondheim University in Norway.”

The best modem proposed by Germany and France, from Standard Elektrik Lorenz (SEL) and coined “Wideband”, became the candidate of France and Germany, while all the other countries wanted the Norwegian proposal, which was called “Narrowband”. After some negotiation, our system was chosen for further optimization. Soon after, GSM decided on a common standard based upon our ideas described above, and the bit rate was set to 270 kbit/s. This choice of bit rate would ensure that the modem could work well in cities as well as in rural areas with long echoes.

After this, efforts to make a Norwegian mobile industry were started, but this is another story [5,6].

References

1 Maseng, T. On the Selection of System Bit-rate in the Mobile Multipath Channel. IEEE Trans. on Vehicular Techn., VT-36 (2), 51–54, 1987.

2 Maseng, T. Digitally Phase Modulated (DPM) Signals. IEEE Trans. on Comm., 911–918, Sept. 1985.

3 Maseng, T. The Power Spectrum of Digital FM as Produced by Digital Circuits. Signal Processing, Amsterdam, Elsevier-North Holland, 253–261, Dec. 1985.

4 Maseng, T, Trandem, O. Adaptive Digital Phase Modulation. Second Nordic Seminar on Digital Land Mobile Radio Communication, 14–16 Oct. 1986, Paper no. 19.

5 Sand, G, Skretting, K. Fortellinger om Forskning. Trondheim, Tapir, 2002.

6 Gulowsen, J. Bro mellom vitenskap og teknologi. Trondheim, Tapir, 2002.

Figure 4 The Paris field trials [From New Scientist, Feb 1987]

Professor Torleiv Maseng is Director of Research at the Norwegian Defence Research Establishment, where he is responsible for communications and information systems. He worked as a scientist at SINTEF in Trondheim for ten years, where he was involved in the design and standardization of GSM. For seven years he was a scientist at the NC3A NATO research center in The Hague. During 1992–94 he was involved in the start-up of the new private mobile operator NetCom GSM in Norway, where he had technical responsibility. Since 1994 he has held a chair in radio communications at the University of Lund in Sweden. In 1996 he took up his employment at the Norwegian Defence Research Establishment (FFI) located at Kjeller outside Oslo. He is the author of more than 150 papers, holds patents and is a Technical Editor of the IEEE Communications Magazine. He has received an award for outstanding research and has arranged large international conferences.

email: [email protected]


My entry into the GSM work

During my studies at the Technical University in Trondheim in the mid 1980s I got in touch with Torleiv Maseng, who was highly involved in working out a radio system candidate for the new pan-European mobile system that was being developed within CEPT. Maseng convinced me that mobile communications was a thrilling area where lots of development would be ongoing for years.

A guest lecture given at the University by Jan Audestad further convinced me. So, thanks to these two gentlemen, Torleiv Maseng became my tutor and Telenor R&D became my future employer. Since then my work has been dedicated to research and development of mobile communications systems and services.

During the spring of 1986 I simulated the behaviour of a digital mobile radio system exposed to various radio propagation conditions as modelled by the COST 207 group (European Co-operation in the field of Scientific and Technical Research, COST action 207: “Digital Land Mobile Radio Communications”), trying to figure out an optimum bit rate for a future mobile system that had to combat or exploit multipath propagation. The simulation work was really to predict the performance of the radio modem that Maseng and his companions were designing for the forthcoming “world championship in mobile radio”, which is more comprehensively described by Maseng in another article in this issue of Telektronikk.

In my diploma thesis carried out during the autumn of 1986, my task was to analytically calculate the frequency re-use distance in a cellular mobile radio system. Mobile communications since the introduction of the NMT system in 1981 had become so popular and widespread that it was quite clear that a coming mobile system would be limited by interference from other users rather than by the link budget. Hence, tight re-use of radio frequencies was a critical issue to increase the overall capacity in the system. Luckily, my analytical calculations of frequency re-use in GSM aligned very well with the simulations carried out by the people studying the GSM radio subsystem within CEPT.

My student work had introduced me to the work that was ongoing throughout Western Europe to define a common standardised system for mobile communications. Hence, the next step when I entered Televerkets Forskningsinstitutt (now Telenor R&D) was to join the standardisation group on the radio subsystem. Then I realised that what I had so far been involved in was only a microscopic fraction of the design to be undertaken to establish a new standard for mobile communications.

Organisation of the work on the GSM radio subsystem

In 1987, when I entered GSM Working Party 2, Alain Maloberti from CNET in France chaired the group.

The group was further divided into three working groups: one dealing with channel organisation, one on modulation aspects, chaired by Torleiv Maseng, and one on handover and cell reselection issues. The working party consisted of a lot of skilful and also colourful people, aiming to pave the way for a digital mobile radio system to provide Western Europe with a modern system for mobile communications.

The overall design criteria for GSM were, amongst others, that the system should be at least as spectrum efficient as the analogue systems, and that the speech quality should be comparable to or better than the speech quality of NMT 900.

GSM Working Party 2

– Towards a radio sub-system for GSM

Rune Harald Rækken

Rune Harald Rækken is Senior Research Scientist in Telenor R&D


The task of designing GSM was a complex issue requiring extended co-operation by several players. The overall design criteria for GSM were, amongst others, that the system should be at least as spectrum efficient as the analogue systems, and that the speech quality should be comparable to or better than that of NMT 900. The work on the radio sub-system involved computer simulations to evaluate the advantages and drawbacks of the different candidates, and the goal was always to optimise the solutions. However, the final solutions often turned out to be sub-optimal due to the system complexity. Another lesson learned is that standardisation is sometimes about finding the best technical solution, and sometimes it is just about politics.


To figure out which candidate could be the most spectrum efficient and best performing was the main task during the Paris tests in the autumn of 1986. This was the most controversial issue that had to be solved with regard to the GSM radio sub-system. So – when the issue of the wideband versus narrowband transmission scheme was solved, it was just “peace and harmony” in the radio subsystem work.

It is quite clear that designing GSM was a complex issue requiring extended co-operation by several players. As the development of GSM went on, huge parts of the system proposals were subject to computer simulations to evaluate the advantages and drawbacks of the different proposals. The simulations were always carried out with the goal of optimising the solutions, but often ended up with solutions that were sub-optimal. And of course, some discussions have been political rather than purely technical, as will always be the case in international co-operation.

My interest was at that time mainly in modulation, channel coding and equalisation aspects, so I joined that working group.

GSM radio sub-system

The protocols and mobility handling of GSM (“Global System for Mobile communications”) are described elsewhere in this issue of Telektronikk. Here I will give a brief overview of the radio sub-system of GSM.

Frequency bands and access technique

GSM utilises TDMA (Time Division Multiple Access): several users share the same carrier frequency, but transmit in separate time slots. Further, the mobiles are transmitting and receiving on different radio frequencies, utilising frequency division duplex (FDD).

Originally, the following frequency bands were allocated to GSM:

• 890 – 915 MHz for uplink (mobile transmitting, base station receiving);

• 935 – 960 MHz for downlink (base station transmitting, mobile station receiving);

giving a duplex separation of 45 MHz. In GSM the channel spacing is 200 kHz, giving room for 124 carriers in the available bandwidth.
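The carrier count and duplex separation follow directly from these figures; a quick sketch of the arithmetic (band edges as given above, one 200 kHz raster position lost to the guard band):

```python
# 25 MHz per direction, 200 kHz channel spacing.
uplink_khz = (890_000, 915_000)     # mobile transmits
downlink_khz = (935_000, 960_000)   # base station transmits
spacing_khz = 200

raster = (uplink_khz[1] - uplink_khz[0]) // spacing_khz  # 125 positions
carriers = raster - 1                                    # guard band: 124 usable
duplex_mhz = (downlink_khz[0] - uplink_khz[0]) // 1000   # 45 MHz separation

assert (carriers, duplex_mhz) == (124, 45)
```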

Later on, there have been extensions of the GSM band, amongst others in the 1800 and 1900 MHz frequency range.

Generation of coded data blocks

GSM is designed for transmission of digitized information. As an example of how bits are organised into blocks for transmission over the radio interface, let’s take a look at the full rate (13 kbit/s) speech channel.

Transmission of digitized speech is always a trade-off between maintaining a subjectively acceptable speech quality and offering the required spectrum efficiency and overall system capacity. The speech coding algorithm in GSM is called RPE-LTP (Regular Pulse Excitation – Long Term Prediction), and the work that led to this solution is described by Jon Emil Natvig in another article in this issue of Telektronikk.

The bit rate from the speech encoder is 13 kbit/s, resulting in a data block of 260 bits every 20 milliseconds. Not all of these bits are equally vulnerable to bit errors, measured in terms of subjective speech quality. Hence, unequal error protection is used, giving the strongest protection to the information bits that are most vulnerable to transmission errors.

Figure 1 Radio frequencies and time slots in GSM, initial frequency band (890–915 MHz uplink with the MS transmitting, 935–960 MHz downlink with the MS receiving, 45 MHz duplex separation, 200 kHz carrier spacing with a guard band, carriers 1–124, and 8 time slots per carrier)

How then was this idea figured out?

Well, there were lots of simulations of the error patterns that might result from the various propagation conditions and interference that GSM might be exposed to. The performance of the radio subsystem with different error protection methods was simulated using those different error patterns, and people performed subjective listening tests to evaluate which coding scheme gave the best subjective speech performance.

Hence, the 260 bits of each speech encoder block are divided into three classes:

• Class 1A: the 50 most significant bits in each block are coded with a shortened (53,50,2) block code. Errors in these bits will significantly decrease the subjective speech quality. If errors are detected among these bits, the Bad Frame Indication (BFI) is set and the previous speech frame is repeated;

• Class 1B: 132 bits;

• The 182 class 1 bits together with 3 parity check bits and 4 tail bits are encoded with a half rate convolutional code;

• Class 2: the 78 least vulnerable bits from the speech frame are sent uncoded over the radio interface.

The encoding of a 260 bit speech frame into a 456 bit encoded data block is schematically shown in Figure 2.

For other logical channels, other channel coding schemes are used. The result is always a data block of 456 bits (the exception is the random access burst). These blocks of 456 encoded data bits are divided into sub-blocks of length 57 bits, which are used to construct the data bursts that are transmitted over the radio channel.
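The bit accounting implied by the class split above can be checked in a few lines; all numbers are those given in the text:

```python
# One 20 ms full-rate speech frame: 260 speech bits -> 456 coded bits.
class_1a, class_1b, class_2 = 50, 132, 78
parity_bits, tail_bits = 3, 4

assert class_1a + class_1b + class_2 == 260        # speech encoder output
conv_in = class_1a + parity_bits + class_1b + tail_bits  # 189 bits in
conv_out = 2 * conv_in                             # half-rate code: 378 bits
coded_block = conv_out + class_2                   # class 2 stays uncoded
assert coded_block == 456
sub_blocks = coded_block // 57                     # 8 sub-blocks of 57 bits
source_rate = 260 / 0.020                          # 13 000 bit/s from the encoder
```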

The GSM transmission system

In GSM, three different methods are used for forward error correction (FEC) to protect the transmitted data against transmission errors: block encoding, convolutional encoding, and interleaving.

Utilising block codes, parity check symbols are added to the data symbols during the encoding operation. Block codes are capable of correcting data errors independent of the error position within the data block, meaning that block codes are well suited for correcting bursts of errors that typically occur on a mobile radio link.

Convolutional codes, on the contrary, operate on bit streams. The convolutional code applied in GSM has a constraint length K = 5, meaning that during the encoding process the bit at the input of the encoder is taken into consideration, in addition to the 4 preceding bits.

What a convolutional code really does is spread the effect of one bit by introducing intersymbol interference. A convolutional code thus introduces time diversity; hence the channel noise is averaged over K data symbols. This is the mechanism giving the error correction properties.
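A minimal sketch of such an encoder follows. The tap masks g0/g1 are illustrative placeholders, not the generator polynomials of the GSM specification (GSM 05.03 defines those); the point is only how each output pair depends on the current bit together with the 4 preceding bits.

```python
def conv_encode(bits, g0=0b10011, g1=0b11011):
    """Rate-1/2 convolutional encoder with constraint length K = 5.

    The 5-bit shift register holds the current bit and the 4 preceding
    ones, so every input bit influences K consecutive output pairs.
    The tap masks are illustrative, not GSM's actual generators.
    """
    state, out = 0, []
    for b in bits + [0, 0, 0, 0]:            # 4 tail bits flush the memory
        state = ((state << 1) | b) & 0b11111
        out.append(bin(state & g0).count("1") % 2)  # parity of tapped bits
        out.append(bin(state & g1).count("1") % 2)
    return out

coded = conv_encode([1, 0, 1, 1])
assert len(coded) == 2 * (4 + 4)             # rate 1/2, plus the tail bits
```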

It is worth noting that this is very similar to the ideas exploited by Maseng et al. when they built their candidate modem for the “world championship in mobile telephony”. It was also thought during the standardisation process that the same Viterbi decoder could be used both for resolving intersymbol interference and for decoding the convolutional code. This is probably the reason why it was clearly stated that GSM should be able to cope with intersymbol interference spread over four bit intervals.

A convolutional code performs badly when there are bursts of errors. If, however, the bit errors are “randomised” in position, a convolutional code can correct errors even at rather high bit error rates. It is therefore of fundamental importance to introduce mechanisms to spread bit errors, particularly because errors on a mobile radio channel have a tendency to occur in bursts.

Interleaving is such a “whitening process”, which spreads bursts of errors to optimise the decoding in the convolutional decoder. Interleaving performs better the larger the interleaving depth is (the more the bits in a block are spread), because the de-correlation between successive bits increases with increasing interleaving depth. The drawback is that increasing interleaving depth also leads to increasing decoding delay; hence there is always a trade-off between performance and delay due to the interleaving process.

Figure 2 Forward error correction for the full rate speech channel: the 50 class 1A bits get 3 parity check bits; the 182 class 1 bits together with the parity bits and 4 tail bits pass through the K = 5 half rate convolutional code, giving 378 bits; the 78 class 2 bits are left uncoded
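A toy block interleaver (write row by row, read column by column) shows the whitening effect. This is a simplification for illustration; GSM actually uses a more elaborate diagonal interleaving over several bursts, but the principle of spreading an error burst is the same.

```python
def interleave(bits, rows, cols):
    """Block interleaver: write row by row, read column by column."""
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(bits, rows, cols):
    """Inverse permutation of interleave()."""
    return [bits[c * rows + r] for r in range(rows) for c in range(cols)]

data = list(range(24))
tx = interleave(data, rows=4, cols=6)
assert deinterleave(tx, rows=4, cols=6) == data

# An error burst hitting 4 consecutive transmitted positions...
burst = tx[8:12]
# ...lands 6 symbols apart in the original block, exactly the kind of
# "randomised" error positions the convolutional decoder can handle.
assert sorted(data.index(x) for x in burst) == [2, 8, 14, 20]
```

Deeper interleaving (more columns) spreads the burst further, at the cost of the extra decoding delay noted above.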

For the non-transparent data channels and the signalling channels there is, in addition to the FEC mechanisms, also a layer 2 protocol using retransmission (ARQ) of data blocks that are still erroneous after the FEC.

Multiplexing

The data bursts transmitted over the air interface in GSM consist of information bits and a known bit pattern (a midamble, which is a training sequence) that is used for synchronisation and also to estimate the impulse response of the radio channel, so that knowledge of the radio propagation conditions can be utilised in the decoding process (soft decoding).

The bit rate (burst bit rate) over the GSM radio interface is 270.833 kbit/s, and the burst length is hence 0.577 ms. The length of the normal burst is 148 bit intervals, in addition to a guard space of 8.25 bit intervals. 8 such data bursts are multiplexed into a TDMA frame of length 4.615 ms. The normal burst is used for most purposes in GSM.
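The timing figures above are mutually consistent, which a few lines of arithmetic confirm:

```python
BIT_RATE = 270.833e3                   # burst bit rate, bit/s
BIT_INTERVAL = 1 / BIT_RATE            # about 3.69 microseconds
BURST_INTERVALS = 148 + 8.25           # normal burst plus guard space

burst_duration = BURST_INTERVALS * BIT_INTERVAL   # about 0.577 ms
tdma_frame = 8 * burst_duration                   # about 4.615 ms

print(round(burst_duration * 1e3, 3), round(tdma_frame * 1e3, 3))  # 0.577 4.615
```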

Modulation

The modulation method used in GSM is Gaussian Minimum Shift Keying (GMSK) with a time-bandwidth product of 0.3. This is a partial response modulation with constant envelope, meaning that all the information is in the phase of the received signal. A time-bandwidth product of 0.3 means that a pulse shaping filter is used that introduces intersymbol interference in the transmitted bit stream to obtain a smaller modulation spectrum.

In the recent extension of GSM, EDGE (Enhanced Data rates for Global Evolution), the main difference from “plain” GSM is that the modulation method 8-PSK is used instead of GMSK, offering 3 bits/symbol instead of 1 bit/symbol in GMSK. This paves the way for higher bit rates in EDGE than in GSM.

The narrowband transmission scheme designed by Maseng et al. outperformed the other system candidates during the Paris tests in 1986. This modem used adaptive digital phase modulation (ADPM). ADPM was, however, replaced by GMSK rather quickly after the Paris tests. With today’s distance to this change of modulation method, it should be revealed that the replacement of ADPM was not founded solely on technical arguments, even though it was claimed that GMSK was a better known modulation method which also allowed a smaller modulation spectrum than ADPM. The choice of GMSK was primarily a political compromise to motivate France and Germany to accept the narrowband GSM solution that the rest of the GSM players had accepted. It should also be emphasised that we are very satisfied with the solutions that the GSM radio transmission system is based on.

It was during this discussion on modulation methods that I, as a newcomer to this field, realised that standardisation is sometimes about finding the best technical solution, and sometimes it is just about politics.

Literature

Langewellpott, U, Rainer, M. Modulation, Coding and Performance. In: Digital Cellular Radio Conference (DCRC), Hagen, Germany, 1988.

[Figure 3 Structure of forward error control in GSM: info bits from the source (speech, data, signalling) pass through the outer block code, the inner convolutional code, interleaving, and encryption/modulation; across the channel (with noise, interference and multipath); and through demodulation/equalisation/decryption, deinterleaving, the convolutional decoder and the block decoder before the info bits are delivered to the receiver]

[Figure 4 Normal burst and TDMA frame. The normal burst carries 3 tail bits, 58 encrypted data bits, a training sequence of 26 bits, 58 encrypted data bits, 3 tail bits and a guard space of 8.25 bit intervals; 8 such bursts (numbered 0–7) form a TDMA frame. The length of the components is given in number of bit intervals (3.69 µs)]


Maseng, T. Wideband or Narrowband? World championship in mobile radio in Paris 1986. Telektronikk, 100 (3), 161–164, 2004. (This issue)

Natvig, J E. The 1987 European speech coding championship. Telektronikk, 100 (3), 182–186, 2004. (This issue)

Ochsner, H. Overview of the Radio Subsystem. In: Digital Cellular Radio Conference (DCRC), Hagen, Germany, 1988.

Rækken, R H. Feilkorrigerende koding i GSM (Error correcting codes used in GSM). Kjeller, Telenor R&D, 1993. (R&D note U 1/93)

Rune Harald Rækken (42) received his MSc degree from the Norwegian Institute of Technology (NTH) in Trondheim in 1986 and has since then been employed by Telenor R&D. In 1987 he started to work on the GSM specifications, with focus on modulation, equalisation and channel coding. Later on, he has worked on radiowave propagation measurements and modelling. Recently he has been focusing on UMTS and IEEE 802 standards for wireless communications. He is now a senior research scientist working in the mobile systems group at Telenor R&D.

email: [email protected]


Interconnecting operators: The three sausage model

This is the story of how the Mobile Application Part (MAP) was created and the problems and conflicts that this development caused between the communities of enthusiastic mobile communications experts and the conservative and preservative telephone switching engineers.

The idea to design an international land mobile communications system came up in CCITT1) Study Group XI (Switching) in late 1980, more than two years before GSM was established. The first document describing how mobility was achieved in the NMT system was presented to the Plenary Assembly of CCITT in 1981. The document was appended to a proposal for a new study question on land mobile systems. The authors were Bjørn Løken and myself. The Study Group found the proposal so interesting that they appointed Bjørn Løken as Interim Rapporteur so that work on land mobile systems could commence immediately without awaiting the approval of the Plenary Assembly.

From mid 1981 the work on land mobile systems in CCITT took off at a violent speed and important results were obtained early. The three most important results obtained during the early years were a simple network model nicknamed the “three sausage” model, a method for non-interrupt handover of calls between different switching centres, and the outline of a protocol supporting mobility within and between Public Land Mobile Networks (PLMN). The latter was later christened the Mobile Application Part (MAP). These achievements were all adopted and developed further by the GSM group.

Figure 1 shows the “three sausage” model of a land mobile system. The sausages are the two PLMNs and the fixed network. The significance of this model is that it is entirely abstract; that is, independent of the particular physical design of the network. The model can be used to describe all major activities taking place in the system and it is general enough to apply to every practical configuration of a mobile system. This allowed MAP to be developed independently of a particular network architecture: MAP is thus not specific to GSM.

The PLMN represents an entity of network ownership. The interaction between PLMNs is thus a cooperation of different mobile network operators allowing mobile subscribers to roam between networks independently of ownership and subscription.

The model shows two PLMNs together with the fixed network. MAP supports mobility between the PLMNs; that is, MAP offers to mobile terminals the capability to move from one PLMN to another PLMN while retaining the capability to receive and make calls. Interconnectivity between a mobile terminal and a fixed terminal or between two mobile terminals takes place across the fixed network. The signalling protocol connecting the PLMN to the fixed network is Signalling System Number 7 (SS No 7). The plan was to implement SS No 7 in the telephone network during the 1980s, so when an international mobile system was ready for operation around 1990, this would be the natural choice of interconnection method. When GSM was put into operation in 1991/92, SS No 7 was in place and mobile communications could commence immediately.

The Mobile Application Part (MAP) of GSM

Jan A Audestad, Senior Adviser in Telenor

This is the story of how the Mobile Application Part was developed. MAP is the neural system interconnecting the distributed computer infrastructure of GSM (VLRs, MSCs, HLRs and other entities). The work on MAP started as a study of the general architecture of mobile systems in ITU two years before the GSM group was established. In 1985 the work was taken over and later completed by the GSM group. MAP was the first protocol of its kind in telecommunications systems. After long and difficult negotiations with the community of switching and signalling experts, MAP got its final structure in 1988. This included the use of Signalling System No. 7 as carrier and the support of services such as roaming, call handling, non-interruptive handover, remote switching management and management of security functions. MAP is one of the real technological triumphs of GSM.

1) The Consultative Committee on International Telephony and Telegraphy of the International Telecommunications Union (ITU).


PLMN architecture: The heritage of NMT

Another development that took place prior to GSM was the specification of an internal architecture of land mobile systems as shown in Figure 2. The architecture is based on the principles first developed for NMT, which were adjusted by the CCITT for use in more flexible configurations.

The architecture is simple. In addition to the radio infrastructure (base stations and cells) and the telephone exchanges switching the calls between the base stations and the fixed network (MSC), there are two types of databases: VLR (Visitor Location Register) and HLR (Home Location Register)2). The VLRs are in charge of a location area. Within this area the mobile stations can roam without updating their location3). Location updating takes place when the mobile station roams from one location area to another. The VLR contains all information about the mobile terminals currently in its location area required for establishing calls to and from these terminals. The VLR also controls the switching process in the MSC. In this respect, the VLR is an intelligent network (IN) node similar to the concept developed independently by Bellcore during the 1980s for serving telephone calls requiring centralised control (routing and payment management of free-phone and premium rate calls, and management of distributed queues and helpdesk functions). In mobile networks, identity management, location updating, routing administration and handling of supplementary services require support procedures similar to those of intelligent networks. Therefore, it is not surprising that the two groups, independently of each other, came up with rather similar solutions to the problem of remote control of switching processes.

The HLR is a database containing subscription information (services and capabilities) and the current location of the mobile terminal. There is usually one HLR for each GSM operator4).

In addition, there are also other databases and entities not important for the basic architecture (equipment registers, authentication key escrows, voicemail systems and short message centres). These databases are also connected to the other network elements by MAP.

There are permanent MAP connections between the VLR and the MSCs it is controlling. There are sporadic MAP connections between the VLR and the HLR for location updating, between two or three MSCs for handover, and between two VLRs for management of identities during location updating. Sporadic means that a relationship is established only when two such entities must exchange information and controls.
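The location-updating interplay between the HLR and the VLRs can be sketched as follows. This is an illustrative sketch with hypothetical data structures and a made-up subscriber id; the real MAP operations and message formats are far richer.

```python
class HLR:
    """Home register: subscription data plus the VLR currently
    serving each subscriber (keyed by a hypothetical id)."""
    def __init__(self):
        self.subscribers = {}   # id -> {"services": [...], "vlr": VLR or None}

    def update_location(self, sub, new_vlr):
        entry = self.subscribers[sub]
        old_vlr = entry["vlr"]
        entry["vlr"] = new_vlr
        if old_vlr is not None and old_vlr is not new_vlr:
            old_vlr.cancel(sub)        # sporadic dialogue to the previous VLR
        return entry["services"]       # copied down to the new VLR

class VLR:
    """Visitor register: holds data only for terminals in its location area."""
    def __init__(self):
        self.visitors = {}

    def attach(self, sub, hlr):
        # Terminal roams into this VLR's area: register with the HLR
        # and receive the subscriber data needed for call handling.
        self.visitors[sub] = hlr.update_location(sub, self)

    def cancel(self, sub):
        self.visitors.pop(sub, None)
```

When a terminal attaches to a new VLR, the HLR both records the new location (so incoming calls can be routed) and cancels the record in the previous VLR.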

[Figure 2 Architecture of GSM: the fixed network, two MSCs each under the call control of a VLR via permanent MAP connections, and the HLR with sporadic MAP connections to the VLRs of the subscriber’s current and previous location areas]

2) These names were inspired by NMT: HMX for home mobile exchange and VMX for visiting mobile exchange, offering similar functions as the HLR and the VLR, respectively.

3) A VLR may control a number of location areas. Roaming between such location areas only requires updating of the VLR and not the HLR.

4) If there are many mobile subscribers, two or more HLR databases may be required. Such configurations are regarded as a single, distributed HLR.

[Figure 1 The three sausage model: the two PLMNs and the fixed network; MAP provides mobility between the PLMNs, while SS No 7 provides connectivity towards the fixed network]



Many of the operations and procedures supported by MAP were outlined in CCITT before the GSM group was established. The work had in fact progressed far before GSM took over the specification in 1985.

Cooperation between SPS/SIG and GSM

Until 1985, not many of us believed that Europe would ever be able to specify a common land mobile system. CEPT5) had thus far never succeeded in its standardisation efforts, so there was no reason to believe that it would succeed this time either. At the GSM meeting in Berlin in October that year, this attitude changed. The core team of the GSM group had now become tightly consolidated and it was decided unanimously that every effort should be made to make GSM a success. An outline of 110 recommendations that should comprise the complete GSM specification was drafted at the meeting so that work could commence on solving concrete problems. Amongst these recommendations was the MAP specification.

This was the kick-off the GSM group needed.

CEPT already had a working group dealing with signalling, SPS/SIG. This group was asked to take charge of the MAP specification. Christian Vernhes, who had worked on MAP right from the beginning in CCITT, became chairman of the team in charge of the specification. In addition, the team consisted of Bruno Chatras (ASN.1 specification) and myself (procedures). Alfred Karner joined the team in the autumn of 1988, cleaning up the formal aspects of the specification.

The request from the GSM group was to deliver a complete specification by January 1989. SPS/SIG met this requirement. Both the syntax and the semantics of the protocol had then been methodically tested and verified by Siemens. This included testing the ASN.1 syntax of the protocol and all procedures that were written in SDL.

Protocol structure

The final structure of the protocol is shown in Figure 3. GSM is a distributed system where most of the procedures are divided into components located in different computers, for example, in the mobile terminal, in one or two VLRs and in the HLR for location updating. A handover between cells connected to different MSCs involves simultaneous execution of software components in the mobile terminal, in two base station transceivers, two base station controllers, two or more MSCs and one VLR. The components in the MSCs and the VLR are interconnected by MAP. The other components are interconnected by other protocols.

MAP is the core element in the network architecture of GSM, tying together MSCs, VLRs, HLRs and other specialised databases and equipment. The configuration of the protocol is shown in Figure 3.

The MAP semantics corresponds to the exchange of procedure calls between the components of the distributed application procedure as shown in the figure. The syntax and the semantics of these procedure calls are mapped onto the protocol via a set of finite state machines in order to transport the commands across the network. The machines not only map the syntax and the semantics but also take care of software synchronisation, ensure that the right software components are bound to each other during the whole transaction, and detect and respond to exception conditions. The state machines ensure that the abstract syntax of MAP can be interfaced to any application layer protocol. MAP can in fact be interfaced with any application layer protocol supporting procedure calls, for example RPC (Remote Procedure Call) of the Internet.

5) Conférence Européenne des Postes et des Télécommunications. The part of CEPT in charge of telecommunications standardisation later became ETSI (European Telecommunications Standards Institute).

[Figure 3 MAP protocol: components of the distributed application procedure exchange procedure calls (the MAP semantics); finite state machines map the MAP syntax onto the RO/TCAP syntax, which is carried by TCAP, SCCP and MTP of Signalling System No 7]


The protocol stack on which MAP was finally implemented is shown in Figure 3. The stack consists entirely of sub-protocols of SS No 7: TCAP (Transaction Capabilities Application Part), SCCP (Signalling Connection Control Part), and MTP (Message Transfer Part).

In search of a suitable protocol stack

The reliability of MAP must be as good as in telecommunications networks at large. This means that the maximum outage of the protocol should not exceed 15 minutes per year, which is the standard reliability requirement for telephone exchanges, telecommunications links and signalling systems. This corresponds to the ultra-high availability of 99.997 %; that is, the system must function properly 99.997 % of the time. The stringent requirement on availability therefore puts constraints on the choice of protocol stack for supporting MAP: all layers in the stack must be at least as good as this.

Another important requirement is the time budget shown in Figure 4. The time budget has to do with psychology. The time from when a telephone call is initiated until the called user is connected (i.e. activation of the ringing tone) must not be more than one or two seconds; otherwise the calling user may prematurely clear the call, because a few seconds is a very long time when waiting for the ringing tone. The total time must be divided among a large number of processing events, allowing 100 milliseconds or less for each of these processes. In the GSM system, such an event may be information exchange between the VLR and the HLR during call establishment. The exchange of messages, including processing time at each end of the connection, should require no more than 100 milliseconds on average and less than 200 milliseconds for 90 % of the calls. Broadly speaking, this allows for typically 30 milliseconds one-way delay (5000 km) and 20 milliseconds processing time. The delay includes processing time at any intermediate node, such as a router directing the signalling information.
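Both budgets are easy to verify with a few lines of arithmetic:

```python
# Availability: at most 15 minutes of outage per year.
MINUTES_PER_YEAR = 365.25 * 24 * 60
availability = 1 - 15 / MINUTES_PER_YEAR
print(round(availability * 100, 3))       # 99.997

# Time budget for one VLR-HLR exchange: delay and processing at both ends.
one_way_delay_ms = 30                     # roughly 5000 km of transfer delay
processing_ms = 20                        # per end of the connection
total_ms = 2 * one_way_delay_ms + 2 * processing_ms
print(total_ms)                           # 100
```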

The availability requirement and the timing constraint put severe restrictions on the choice of protocol stack.

Only three protocol stacks suitable for transfer of MAP messages were available in the mid 1980s. These were the Internet suite of protocols, the protocols of public packet switched data networks (X.25), and Signalling System No 7. One problem was that these stacks were not even complete; that is, they did not contain all the layers required for supporting the application protocol.

The most complete protocol stack was RPC over TCP/IP. MAP fitted nicely into the RPC format, TCP ensured end-to-end integrity and IP was able to route the messages across the network. Around 1985, however, the Internet was only used as a research network. The network was owned by no-one and there was no formal organisation in charge of its evolution. The reliability and quality of service of the network were unspecified except that messages were delivered on a best effort basis; that is, there was no specification concerning how much time it would take to transfer a message across the network. The Internet as such was therefore not suitable as a carrier of information requiring real time operation and ultra-high availability.

Nevertheless, one possibility we had was to apply the Internet technology between MSCs, VLRs and HLRs – not the Internet itself. Exploitation of the Internet technology offered us one possible, though not the desired, solution. We kept this as the last resort solution all the time up to the summer of 1988.

Another possibility was to use the X.25 data network. A complex stack of application protocols, the Open System Interconnection (OSI) suite, was specified by CCITT and ISO during the 1980s. The problem with the X.25 network was that it did not contain secure enough mechanisms for autonomously restarting operation after a major link failure. These had to be built into the specification before X.25 could be used for high reliability transfer of information. Furthermore, the network supporting MAP had to be a dedicated network for MAP only, because otherwise other traffic could cause freeze-out or excessive delays. X.25 did not offer more than what the Internet technology did for a fraction of the cost. X.25 was therefore never a serious alternative.

[Figure 4 Time constraints: the total time for a VLR–HLR exchange is the sum of the processing time at each end and the transfer delay in each direction]


The solution we wanted was Signalling System No 7, because this protocol was designed for extreme reliability and real time operation. However, the signalling system did not support higher layer protocols ensuring end-to-end integrity. The basic protocol designed for operation between telephone exchanges consisted only of a data link layer and a simple network layer that could only be used on a very simple network infrastructure. This is the Message Transfer Part (MTP). In order to convert the signalling system into a data network supporting flexible routing, the Signalling Connection Control Part (SCCP) was developed. This is essentially a data network combining the features of IP and X.25, offering both connection-oriented and connectionless operation. Moreover, the SCCP had a separate connectionless category offering guarantee of delivery. The signalling system offered neither a transport protocol nor application layer protocols. We needed these protocol layers in order to ensure proper end-to-end control of the messages.

As early as 1984 we requested the signalling subgroup of Study Group XI to specify such protocol layers, but our request was ignored. It came as a surprise when we received a question from the same group in 1986 asking us to provide them with the set of requirements we needed to implement MAP. The group had started specifying an application layer protocol called the Transaction Capabilities Application Part (TCAP). We learned much later that the reason this was done was not to satisfy our needs but to support the development of the intelligent network in the USA.

This time things also went wrong, because Study Group XI specified a protocol that was adequate for the simple intelligent network protocols but far too simple for a complex protocol like MAP. We had to make some additions to the specification before it could be used.

Transaction Capabilities (TCAP): The best we got

TCAP consists of two layers: the component sub-layer supporting the Remote Operations (RO) protocol, and the transaction sub-layer protocol. RO is a remote procedure call protocol standardised as part of the OSI development. It allows two application entities to exchange invoke commands requesting that a remote task is executed, return the results of the execution, and provide error codes if things go wrong.

The transaction layer is a connection-oriented protocol where the information generated by RO is sent in Begin, Continue and End messages supporting obvious functions. The protocol offers multiplexing, since several RO messages can be sent in one transaction message.
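A toy state machine can illustrate the dialogue. This is a deliberately simplified sketch of the idea only; the real TCAP transaction sub-layer has more states, abort messages and identifiers, and the operation name used in the example is hypothetical.

```python
class Transaction:
    """Toy transaction dialogue: opened by Begin, extended by any
    number of Continue messages, closed by End. Each message may
    multiplex several RO components (invokes, results, errors)."""

    def __init__(self):
        self.state = "idle"
        self.components = []

    def receive(self, msg_type, components=()):
        if self.state == "idle" and msg_type == "Begin":
            self.state = "active"
        elif self.state == "active" and msg_type == "Continue":
            pass                           # dialogue simply continues
        elif self.state == "active" and msg_type == "End":
            self.state = "closed"
        else:
            raise ValueError(f"{msg_type} not allowed in state {self.state}")
        self.components.extend(components)
```

For example, a dialogue carrying a hypothetical `updateLocation` invoke and its result would pass through Begin, Continue and End, after which the transaction is closed and no further messages are accepted.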

However, it turned out that TCAP was too simple for a protocol as complex as MAP. TCAP did not offer appropriate mechanisms for controlled establishment and release of the connection, explicit binding of the application modules, activity management, recovery management and software synchronisation. It never occurred to Study Group XI that there were good reasons for the complex application layer protocols defined by ISO for general data communications: association control, session management, concurrency management and reliable transfer of information.

The functions needed for proper operation of the GSM system (association control, explicit binding, session management, and recovery) were included in the MAP specification itself, making it more complex and less universal than necessary.

Alignment with the fixed network: Much hindsight, little foresight

One of the major problems we faced was the alignment of GSM and the fixed network. The general opinion of the fixed network community, including the market analysts of the telecommunications operators, was that mobile communications would for all time remain a small service compared to fixed telephony. Statistics from the Nordic countries showing a steady and tremendous growth of mobile communications were brushed aside with the argument that this was just a transient that would soon be over.

This attitude was global, and the strongest opponents were the North-American telecommunications operators, for which mobile communications represented a real competitive threat. Mobile communications was a way in which new companies could perforate the monopolies of the seven RBOCs6) – nicknamed the Baby Bells – that AT&T was sliced into during its divestiture in 1984.

The small but growing community of people working on mobile systems had come to the opposite conclusion: the design of handheld mobile terminals had become feasible, moving the mobile phone out of the car and onto the pavement. The potential market for mobile communications was then just as big as the

6) Regional Bell Operating Company.


market for fixed telephony, or perhaps even bigger: telecommunications was about to enter a new era where flexibility and ubiquity would become the most important drivers of market growth.

One undisputable requirement that we had to accept right from the beginning was that the GSM system should not have any impact on the specifications of the fixed network. This included signalling, routing algorithms, numbering and number analysis, switching functions and network architecture.

We may regret this attitude now.

Let us look at some problems that this decision caused, not just for the GSM system but for telecommunications in general.

GSM uses non-geographic numbers. This means that there is no relationship whatsoever between the telephone number of a mobile station and where that station is located. The number only identifies where in the network the subscription information and the current location of the subscriber can be found. We had to invent new methods by which we could handle the routing of calls to mobile terminals, since this was the first time that the problem of non-geographic routing occurred in real systems.

For several years the MAP team and the GSM group proposed that the routing algorithms of the telephone network should be altered to allow a look-ahead procedure in MAP, by which the location of the called mobile subscriber was first found and then the call was sent directly to the destination. This procedure was proposed in order to avoid “tromboning”; that is, to avoid that a call from an Italian to a Norwegian on vacation in Italy is set up all the way from Italy to Norway and then back again to Italy. The application of a look-ahead procedure would ensure that the call could be established on a short connection within Italy. At the time, the look-ahead procedure was regarded as a feature solely required in GSM. However, if implemented, the look-ahead procedure would have solved several routing problems such as number portability, allocation of personal numbers, role and time dependent routing of calls, and selection of the shortest and cheapest route for voicemail calls, free-phone calls, calls to distributed helpdesks, calls to premium rate services, and so on. It would also have enabled a complete integration of GSM and intelligent network services.

The feature was accepted neither by CCITT nor by ETSI. This decision caused many problems later, as the use of non-geographic numbers has become the rule rather than the exception.

Another problem concerned segmentation of messages in the SCCP. No such service was included in the original specification of the SCCP, since the opinion was that segmentation of long messages had to be done by the application using the SCCP. It took us several years and numerous meetings to persuade the designers of the SCCP that segmentation at the application layer is impossible, because there is no simple way in which the length of application messages can be determined. The reason is that the application protocol is written in an abstract syntax requiring compilation in exactly the same way as any program written on the computer. The compilation of the messages takes place at the interface between TCAP and the SCCP, and it is only there that the exact bit pattern of the message is known. The length of a message then depends on both the syntax of the MAP message and the format of TCAP: the bit pattern is different for the same MAP message if it is sent in a Begin or a Continue format. The length and content of the bit pattern depend on such subtle details as the invoke number of the RO message and the transaction number of the transaction message. Segmentation must then either take place in TCAP after compilation or in the SCCP before the message is sent. The message must be reassembled before it is delivered to the de-compilation process of TCAP, which unwraps the transaction messages and the remote operations in order to pick out the MAP message.
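The service that was eventually added can be pictured as follows — a generic sketch of lower-layer segmentation and reassembly, not the actual SCCP encoding:

```python
def segment(octets, max_len):
    """Split an already-encoded message into segments below the
    application layer; all but the last carry a 'more data' flag."""
    chunks = [octets[i:i + max_len] for i in range(0, len(octets), max_len)]
    return [(i < len(chunks) - 1, chunk) for i, chunk in enumerate(chunks)]

def reassemble(segments):
    """Rejoin the segments before handing the message back up;
    the last segment is recognised by its cleared 'more data' flag."""
    assert segments and segments[-1][0] is False
    return b"".join(chunk for _, chunk in segments)
```

The point of the argument above is that only the layer that sees the compiled bit pattern knows `len(octets)`, so only that layer can segment correctly.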

The task of making the signalling experts understand this subtle detail was difficult, but in the end the SCCP was amended to support segmentation. Otherwise we would have been forced to resort to solutions based on TCP/IP.

A final problem concerned addressing. SCCP offered three addressing schemes: the point codes identifying exchanges, the subsystem number intended to identify a specific box or function at the signalling interface, and the global title, which is nothing but a telephone number identifying the termination. The point code is not globally unique and cannot be used for MAP; MAP required so many subsystem numbers in order to identify the different functions and boxes that too many of the available codes would have been used up. We could therefore only use the global title. This seems trivial but it is not.

In data communications, several addresses are usually needed in order to identify the process that needs to be invoked. The global title identifies only the VLR, HLR, MSC or whatever other type of equipment is found at the termination. The global title does not identify each of the several functions that the VLR is in charge of. Noting that the VLR may not be a single


computer but a complex of many computers makes the problem even bigger. This means that the final destination of a TCAP message – the application module that shall perform the actual task – can only be identified from addressing information contained in the transaction messages, the remote operations messages or the MAP messages.

In order to understand why this is a problem, let us look at how this feature is solved in TCP/IP.

Broadly speaking, the IP number identifies the computer. The header of IP contains a field indicating which type of protocol is contained in the information field of the packet, e.g. TCP or UDP. This allows the computer to select the correct software for analysing the content of the information field. Similarly, TCP (or UDP) contains an address called the port number. The port number identifies a computer program that is capable of analysing the application message embedded in the information field of the TCP segment. Behind each port there is then a particular piece of software that is able to interpret the format of the message (http, email, mpeg and so on). Based on the analysis of addresses and other information included in the application message, the computer can then finally start the program that can actually handle the content of the message.
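The two-stage dispatch described in this paragraph can be sketched like this. The protocol numbers 6 and 17 are the real IANA values for TCP and UDP; the port-to-application bindings are illustrative examples only.

```python
# Stage 1: the IP protocol field selects the transport handler.
TRANSPORTS = {6: "TCP", 17: "UDP"}

# Stage 2: the port number selects the program behind the transport.
PORTS = {80: "http", 25: "smtp"}          # example bindings only

def demultiplex(ip_protocol, port):
    """Mimic the receiver's dispatch: protocol field first, port second."""
    transport = TRANSPORTS[ip_protocol]   # which transport software to run
    application = PORTS[port]             # which program gets the payload
    return transport, application

print(demultiplex(6, 80))                 # ('TCP', 'http')
```

The contrast with SCCP addressing is that here each layer contributes one level of the address, so no single identifier has to name the final application module.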

When the SCCP and TCAP were specified, this way of handling computer interactions was obviously not understood properly. Otherwise, a transport layer would have been introduced just in order to handle addresses beyond simple routing addresses in complex computer systems. A protocol as simple as UDP would have done the trick.

Without the support of a transport layer protocol and supporting application layer protocols, MAP relied too much on Signalling System No 7 and became less flexible than we had hoped when we started developing it.

In the end a success

MAP was the first application protocol designed for signalling purposes. The first version of it was sent for approval to the GSM group in February 1989. The work on the second version had already started a couple of months earlier. In less than one year the second version was finalised, in time for implementation in the first GSM network. Version 1 was never implemented. The first version contained more than 650 pages including appendices and supplementary material; the second and third versions contain about 750 and 1000 pages, respectively.

The first version consisted of 54 individual procedures. The newer versions are just a little bigger.

It took a long time to develop the first version of MAP. The reason was that there was no similar protocol that we could use as a template. In fact, we had to make all the mistakes ourselves in order to learn how to do it. The ASN.1 encoding of the protocol, for instance, was rewritten several times before we understood how to write it. I think we misinterpreted the ASN.1 language in all the ways this possibly could be done. Finally, Alfred Karner showed us how to do it.

The application protocols developed later for intelligent networks, supplementary services in the telephone network, universal personal telecommunications and UMTS are all based on the methods and principles developed in MAP. Some of the members of the MAP team were invited to various groups in CCITT and ETSI to explain how to interpret the ASN.1 and SDL specifications and apply these languages in protocol design.

The development of the MAP protocol was indeed pioneering work.

For a presentation of the author, turn to page 54.


How did it start?

After the original GSM group had established the three first subgroups (or “Working Parties”, which was the official wording), the responsibility for all signalling was placed in WP3 “Signalling and Network Aspects”. The whole idea was to define precise specifications for a set of interfaces between logical groupings of functions. One very obvious interface was the air interface, which also constitutes the border between the subscriber’s equipment and the network. The usual working procedure was then, as now, to appoint an editor for a defined recommendation. The responsibility of an editor is typically to draft a first version and to include additions and changes based on written input documents and meeting discussions.

Dr. Woldemar Fuhrmann, then working for the German incumbent operator DeTeCon, was appointed as the editor of the core specification of the GSM radio interface Layer 3. Dr. Fuhrmann was a very skilled young man with experience from data communications and science, and in addition he had an artistic talent for drawing; together these characteristics made him the best possible choice for the challenging task.

The first version of 04.08, which became the official GSM recommendation number for the Layer 3 specification for mobile-network signalling, was presented to WP3 in early 1986. It was already then a comprehensive document with a number of SDL (Specification and Description Language) diagrams of the protocol behaviour included. It soon became clear that the amount of work and discussion needed to finalise this area was too large to be handled in WP3 alone. There was simply too little time available within the meetings, and too long a period between the meetings, to get the work done. It was then decided to establish a subgroup chaired by the editor in order to speed up the process. The group was given the descriptive name Layer 3 Expert Group, or L3EG for short. After the first meeting in Paris in August 1986 with four persons present, a very intense working period involving around 10–15 people over approximately two years followed before the recommendation was considered stable.

The task and its challenges

To understand the discussions and results from WP3 and L3EG we must remember the situation in the telecom world at the time. ISDN had been the subject of definition work for some years already, but no implementation had yet been done. SS No 7, the first really digital and datacom-inspired signalling system, was just in its introduction phase, and data communications was mainly handled in stand-alone dedicated networks (either circuit switched or packet switched). The common vision for the future telecom networks was based on ISDN and SS No 7, both of them being inspired by the datacom world through the ISO OSI model (Open Systems Interconnection), describing functions belonging to separate independent protocol layers. It was decided early that GSM should be a mobile extension of the ISDN network; i.e. it should encompass as much as possible of the ISDN features and functions, but still make efficient use of the radio spectrum. The mobile switches (MSC) should look like normal ISDN switches on their external interfaces, and the subscriber connections (i.e. the radio path) should logically be as close as possible to the ISDN subscriber lines.

Inspired by the OSI model and the ISDN subscriber interface specifications, the need for three distinct protocol layers was identified:

• Layer 1 (the physical layer), closely related to the physical medium, with the responsibility of mapping the protocol units to the physical channels, encoding/decoding, power control etc. in order to convey the information over the radio channel.

Signalling over the radio path

KNUT ERIK WALTER

Knut Erik Walter is Senior Technical Adviser in Telenor R&D

“Mobile to network signalling” – it doesn’t sound very exciting, does it? Nevertheless, it is a fact that this area is the very central part of the GSM system, since it reflects the logic (or intelligence?) of the functionality that makes a GSM mobile with its SIM-card the powerful communication tool that we know. It contains a number of different protocol levels and procedures that perform the necessary housekeeping which together allow the user of a mobile station to concentrate entirely on the services. This article contains a short description of the protocol levels and main procedures, and a personal impression of the story of their invention.


• Layer 2 (the data-link layer), with its main functions being error correction and segmentation of longer messages into suitable units for transfer by Layer 1 (and re-assembly at the receiving end).

• Layer 3 (the network layer), whose main functions are to provide the actual GSM services (speech calls, data calls, SMS, roaming ...).

Just as ISDN, GSM was (and still is) mainly a circuit-switched network, i.e. in order for information to be sent between two users, resources constituting a “channel” are established and maintained for the duration of the call. This also makes it meaningful to distinguish between a Signalling Plane (used for all control functions related to establishing, maintaining and releasing the information path) and a User Plane defining the information path itself. The procedures and design of the protocols described here all relate to the signalling plane.

According to the OSI model principles, the protocols were defined by specifying the actual PDUs (Protocol Data Units) to be sent between the protocol entities belonging to the same layers. On the mobile side the mapping between the logical protocol entity and the physical equipment was trivial; everything must reside in the same unit. On the network side it was more complicated. Layer 1 was obviously terminated in the Base Transceiver Station (BTS), and the same was decided for Layer 2, since this was also specifically designed for the radio path. The Layer 3 signalling, however, is partly terminated in the BTS, partly in the BSC (Base Station Controller), and partly in the MSC. This reflects the fact that the concept of protocol entities of different layers is just a practical model of the system, and care was taken not to put much constraint on the actual implementation. While the actual format of the information to be sent on a physical interface needs to be standardised, the internal design should be left to the implementers. The task of the GSM standardisation group was to define interfaces and interconnection principles, not the implementation of physical units.

With the OSI model and ISDN specifications present, why was it such a great task to define the equivalent version for GSM? The answer lies in two simple facts related to mobile communications:

• The physical connection is not permanently available, and its characteristics are in addition highly variable.

• The subscribers move freely around between the network access points.

The Layer 3 functions and the search for an appropriate description model

As described above, a number of additional requirements are placed on the design of a mobile system compared to a traditional circuit-switched telecommunication network. Let us look a little closer at the consequences of the two areas of new requirements:

The first one is related to the fact that radio is used as a communication medium. Since radio frequencies are a scarce resource, no system for public use can allow resources to be permanently allocated to every user (contrary to the last mile of traditional telephony). A mobile system therefore must include functions to manage the frequency resources according to the users’ needs. Methods for access control, mobile station identification and channel allocation are crucial. In addition, functions are needed to adjust and control the resources when the user moves during an active communication. This includes functions like power control, timing adjustments and handover between base stations. The term “Radio Resource management” (RR) was given to this group of functions.

The other area that imposed challenges is due to the very core of a mobile system, the user’s mobility. Since the system cannot use the access point as an implicit identification of the current user, procedures must be defined in order to verify the real identity before any access to the network resources can be granted. Similarly, there must be mechanisms for the network to locate and connect users when needed, e.g. in conjunction with incoming calls. This functionality area was named “Mobility Management” (MM), and includes functions like identity management, authentication and location updating.

The ISDN subscriber line Layer 3 specification (CCITT Q.931) uses a logical protocol model for the terminal side and the network side respectively. In the model the protocol entities are defined as finite state machines, i.e. only a limited set of defined states is allowed, and moving between states is triggered by internal or external signals. (Internal signals are typically timer expiries, and external signals are protocol messages received from the peer entity.) The ISDN specification contains definitions of the states related to call set-up, call modification and call release, all of them relevant to GSM. An example of the states and transitions relevant to call set-up is shown in Figure 1.
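The finite-state-machine idea can be sketched in a few lines of Python. The states below are simplified from the call set-up example of Figure 1; the trigger names are invented and do not match the actual Q.931/04.08 signal names:

```python
# A minimal finite-state-machine sketch of the idea behind Q.931/GSM call
# control. States and triggers are simplified for illustration.

TRANSITIONS = {
    # (current state, signal) -> next state
    ("Null", "setup_request"):        "Call initiated",
    ("Call initiated", "proceeding"): "Call proceeding",
    ("Call proceeding", "alerting"):  "Call delivered",
    ("Call delivered", "connect"):    "Active",
    ("Active", "disconnect"):         "Null",
}

class CallControl:
    """Protocol entity modelled as a finite state machine."""
    def __init__(self):
        self.state = "Null"

    def signal(self, event):
        key = (self.state, event)
        if key not in TRANSITIONS:
            # Only a limited set of transitions is defined by the protocol
            raise ValueError(f"signal {event!r} not allowed in state {self.state!r}")
        self.state = TRANSITIONS[key]
        return self.state

cc = CallControl()
for ev in ["setup_request", "proceeding", "alerting", "connect"]:
    cc.signal(ev)
print(cc.state)  # Active
```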

The first version of the GSM Layer 3 specification tried to encompass the RR and MM functionality in the same protocol entity, i.e. the state machine had to get a number of new defined states reflecting the appropriate status of RR and MM relations (channel connected, location updated and others). It soon became


clear that the result was too complex; there were simply too many variables to handle and still ensure that the result was a stable protocol. The L3EG therefore decided to divide Layer 3 into three logical entities: Call Control, Mobility Management, and Radio Resource Management. Each of them could then be described with a manageable number of states and allowed transitions, and with a set of protocol procedures per unit. The challenge still remained, however: how to describe their internal relations? Long discussions both in the meetings, during lunches and in dark pubs in late evenings kept the L3EG delegates busy for months. A simple proposal was to model the three logical entities in parallel, with all of them connected to a “coordinating entity”. It turned out that this did not imply any simplification, since the coordinating entity had to reflect the different states of the three logical entities, i.e. we were back to the starting point. Then a solution was proposed by the Swedish delegates, which was as ingenious as it was simple: the RR, MM and CC entities were modelled as sublayers with RR at the bottom, MM in the middle, and CC as part of a new “Connection Management” (CM) entity on the top. The normal OSI principle of a higher (sub)layer governing a lower one was applied, and then suddenly the need for a coordinating function was removed. Applying this model really accelerated the progress, because the definition of the three parts could be made in parallel. The model introduced the new concept “MM-connection”, which is the service needed before a call (or a short message transfer) can be initiated. An “MM-connection” needs an “RR connection”, which is simply a new name for channel resources allocated. The concept of MM-connection required one new Layer 3 message, but apart from this the sublayering is not reflected in additional overhead with separate protocol unit headers. Messages belonging to the different logical units were all defined in the same logical space, but with different protocol identifiers. Figure 2 shows the resulting model that was decided to be included in a separate specification document (04.07: Layer 3 interface principles) for information purposes.
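A toy sketch of the sublayer solution: CC uses the service of MM, which in turn uses the service of RR, so each entity only needs to know the one directly below it and no coordinating entity is needed. All class and string names here are invented for illustration and do not reflect the actual primitives:

```python
# Sketch of the sublayer model: RR at the bottom, MM in the middle,
# CC on top. A higher (sub)layer uses the service of the one below it.

class RR:
    """Radio Resource sublayer: allocates channel resources."""
    def connect(self):
        return ["RR connection (channel allocated)"]

class MM:
    """Mobility Management sublayer: provides MM-connections on top of RR."""
    def __init__(self, rr):
        self.rr = rr
    def connect(self):
        return self.rr.connect() + ["MM-connection (identity verified)"]

class CC:
    """Call Control: initiates a call once an MM-connection exists."""
    def __init__(self, mm):
        self.mm = mm
    def setup_call(self):
        return self.mm.connect() + ["CC call set-up"]

steps = CC(MM(RR())).setup_call()
print(steps)
```

Running the sketch shows the layered order of a call set-up: the RR connection first, then the MM-connection, then the CC procedure itself.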

The Radio Resource management – the border line between radio engineering and software engineering

In addition to the protocol modelling, the Radio Resource management was the area that caused most discussion during the intensive design phase back in 1986–87. From the WP2 group (Radio aspects) a number of physical limitations were defined; their focus was naturally efficiency. On the other hand, WP3 was

[Figure: state-transition graph with states Null, Call initiated, Call proceeding, Call delivered, Call present, MT call confirmed, Call received, Connect request, Active, Disconnect request, Disconnect indication and Release request, connected by MNCC service primitives such as MNCC-setup-req and MNCC-rel-ind]

Figure 1 Service graph of Call Control entity – MS side (from 3GPP 24.007)


always stressing flexibility for future changes and extensions, which often meant additional protocol fields or functions. Related to the protocol hierarchy, WP2 was responsible for Layer 1, while WP3 was responsible for Layer 2 and Layer 3. It might be just a coincidence that RR was logically defined as being a part of Layer 3, since what it really does is to control Layer 1. It is therefore not surprising that a new body, the Layer 1 Expert Group, was established in order to define the details on the border of the responsibility areas of WP2 and WP3. Channel access and channel control were the main areas where discussions took place. For a long period the working assumption was that signalling should mainly be performed

on the so-called Common Control Channels (CCCH). Lots of proposals for the access control functions for these channels were made, and here the conflict between efficiency and flexibility was most apparent. Late in 1987 the “four big ones”, i.e. the German, British, French and Italian operators, made a common proposal to include a concept of dedicated signalling channels. Now the use of CCCH could be reduced to a very short initial access, and the bit consumption in the later signalling phases was less critical.

The RR management protocol defines this initial access procedure as well as all transmission of the logical information from the network to the mobiles. It also contains the paging procedure and the handover procedure.

As already mentioned, it is closely related to the physical layer, and as a consequence it has later been moved to a separate specification.

The Mobility Management – keeping track of and identifying the users of the system

Users of a mobile system are of course mobile; they want the services to be available to them wherever and whenever they want. Moreover, they want to be accessible when roaming both nationally and internationally. The feature of international roaming from day 1 was probably the most important reason for the tremendous success that the GSM system has experienced. The requirements were clear, and the network structure of the existing analogue networks (especially NMT) was adopted. A key principle is to distinguish between mobiles currently in use and the others. While the network knows exactly the whereabouts of a mobile active in a call, it is sufficient that it knows where to search for idle mobiles, e.g. at incoming calls. It is obviously not possible to search in real time all over a countrywide network (or even worse, in all networks where roaming is allowed); on the other hand it is a waste of resources to keep track of every cell change for mobiles currently not in use. The concept of Location Area is the applied solution, being defined as a number of cells within which a mobile may roam without the need to update any registrations in the network. In GSM, as in NMT, the task of informing the network that the location area of a particular mobile has changed is placed on the mobile itself. This core procedure is the main task of the MM part of the L3 specification. In addition to the basic procedure that is triggered when the mobile notices that a new area is entered, functions for periodic updating as well as local indication of power off and on are included. The purpose of these additions is to improve the service and network efficiency.
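The core location-updating decision on the mobile side can be sketched in a few lines; the cell and Location Area identifiers below are invented for illustration:

```python
# Sketch of the location-updating decision: the mobile compares the Location
# Area broadcast by the cell it camps on with the one it last registered,
# and triggers signalling only when they differ.

# Hypothetical mapping of cells to Location Areas
CELL_TO_LA = {"cell-1": "LA-A", "cell-2": "LA-A", "cell-3": "LA-B"}

class Mobile:
    def __init__(self):
        self.registered_la = None

    def camp_on(self, cell):
        """Called whenever the mobile selects a new cell."""
        la = CELL_TO_LA[cell]
        if la != self.registered_la:
            self.registered_la = la
            return f"LOCATION UPDATING REQUEST ({la})"  # signalling needed
        return "no signalling"                          # same LA: stay quiet

m = Mobile()
print(m.camp_on("cell-1"))  # new LA -> update
print(m.camp_on("cell-2"))  # same LA -> nothing sent
print(m.camp_on("cell-3"))  # LA changed -> update
```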

[Figure: Layer 3 signalling protocol stack on the mobile side, with CC, SS and SMS entities on top of MM, and MM on top of RR; service access points MNCC-SAP, MNSS-SAP, MNSMS-SAP and MMREG-SAP; RR mapped via SAPI 0 and SAPI 3 onto the logical channels RACH, SDCCH, SACCH, FACCH, AGCH, PCH and BCCH]

Figure 2 Mobile side protocol model (from 3GPP 24.007)


At the time of design, most of the discussions in the MM area were related to the different security functions that had to be included. L3EG here received requirements from WP1 (Service aspects) and from the Security Expert Group defining the actual algorithms and the information that had to be transferred. Authentication of a valid subscription and encryption of the transferred information are the main functions involving MM signalling.

In short, the role of the MM (and its underlying RR, L2 and L1) was defined to be the provision of secure connectivity to an identified subscriber unit, so that applications can access it in a similar way as over a physical line.

The Connection Management – where the user services are handled

The sublayer model adopted identified three different protocol entities belonging to the CM layer:

• Circuit Switched Call Control
• Supplementary Services Control
• Short Message Transfer

L3EG was responsible only for the first one, the call control procedures for mobile originated and mobile terminated calls. Here the ISDN procedures were used to a very large extent, but several messages had to be reduced in size due to the limitations of the signalling channel. One example of this is the reduction from 8 to 4 bits per digit in address information fields, quite natural since no ISDN numbers so far have included anything other than numerical values. Another example is the removal of the possibility to include user-to-user information in the SETUP message. In addition, all mobile-specific service elements had to be designed, i.e. identification of the different channel modes, speech codecs and so on.

One specific challenge that caused some attention was the information from the speech codec experts that DTMF tones could not be carried through the selected speech codec. A “workaround” consisting of signalling messages was then designed, simply based on the principle of instructing the MSC to start and stop the tones according to the user’s touches on the keypad.
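The principle of the workaround can be sketched as a toy message exchange. The START/STOP message names follow the idea described above, but the exchange here is simplified for illustration:

```python
# Toy sketch of the DTMF workaround: no tone passes through the speech
# codec; the mobile sends signalling messages and the MSC generates the
# tone towards the far end for as long as the key is held down.

class MSC:
    """Network side: turns tones on and off on request."""
    def __init__(self):
        self.log = []

    def receive(self, message, digit=None):
        if message == "START DTMF":
            self.log.append(f"tone {digit} on")
        elif message == "STOP DTMF":
            self.log.append("tone off")

def key_press(msc, digit):
    # The mobile signals key down and key up instead of sending audio.
    msc.receive("START DTMF", digit)
    msc.receive("STOP DTMF")

msc = MSC()
for d in "42":
    key_press(msc, d)
print(msc.log)  # ['tone 4 on', 'tone off', 'tone 2 on', 'tone off']
```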

All in all, the design of the call control part of the Layer 3 signalling did not cause large discussions once the RR and MM peculiarities were out of the way.

Remote operations, building blocks, structured procedures

As already mentioned, care was taken to give as much freedom as possible for the implementation phase. The specification therefore consists almost exclusively of so-called elementary procedures, showing a signal sent and the possible allowed responses. This was inspired by the remote operations principles, which were a hot topic in fixed networks at the time. So while state machine models, SDL diagrams etc. were extremely useful during the specification process, they were deliberately removed in order not to constrain the implementations. In order to implement a service, e.g. establish a circuit switched call, the elementary procedures must be combined in a structured way, but this is not specified. The mobile station was specified mainly to be a slave to the network in this respect, a principle that was needed to allow equipment from several vendors to be used simultaneously in the infrastructure provided by one manufacturer. The structuring of elementary procedures into complete signalling sequences is implemented in the different switching manufacturers’ software.

The work proved to be successful and future-proof!

Looking back at the design phase of GSM one thing is clear: the flexibility that was built into the protocol was worth all its costs! It has made improvements and completely new services possible, while still allowing early equipment to work and use its original service set. New speech codecs, higher data rates, packet switched services (GPRS) – everything fits into the space provided by the original protocol design. The basic functionality has also been extended to encompass UMTS, where the simple basic MM (extended for GPRS with GMM) is still the basis for the logics of the system. Moreover, the design phase gave a group of people, including myself, the chance of building a knowledge platform that has proved to be of great value over the years.

Knut Erik Walter (50) graduated as Siv.Ing. from the Norwegian Institute of Technology (NTH) in Trondheim in 1982. He was employed by Telenor R&D, where he was heavily involved in the GSM specification work. He was Research Manager for the mobile systems in Telenor R&D from 1990 to 1993. Knut Erik Walter joined Telenor Mobil when GSM was introduced in 1993 and held various positions on the technical product development and strategy side. He rejoined Telenor R&D in 2003 as a senior technical adviser.

email: [email protected]


All this was unproven technology at the time, and an important part of the preparatory work was to conduct a number of studies and tests to verify the feasibility of a digital system. These studies were a significant basis for the system design. The so-called “Paris competition”, which compared different alternatives for the radio transmission – leading to the selection of the “narrow-band” TDMA radio access method – is well publicised, as described in [2]. The radio access study, however, was itself done under the unproven assumption that satisfactory speech transmission could be achieved at a bit rate around 16 kbit/s.

In 1985–86, the state of the art in digital coding of speech had reached a level that made this assumption quite reasonable. At that time, ITU-CCITT had standardised speech coding at 32 kbit/s for telephony speech (ITU G.721) and at 64 kbit/s for wideband (7 kHz) speech (ITU G.722). Speech coding at 16 kbit/s and below was still a research subject, even though computer simulations had demonstrated promising speech quality results. It was however unknown what quality could be obtained in the adverse bit error conditions expected in mobile radio. This fundamental issue was addressed in a less well-known but equally important study: the GSM speech coder selection process, which is the subject of this paper.

Creation of SCEG

The “Speech Coding Experts Group” (SCEG) was set up in October 1985 to deal with the speech processing issues of GSM. SCEG was not formally a subgroup of GSM but had to be set up under CEPT TM3, which was formally responsible for speech processing issues in the CEPT committee structure. In practice, however, SCEG acted as a normal GSM working group, but had to report progress in both fora. The task given to SCEG was two-fold:

1) To demonstrate that a digital solution with speech coding at around 16 kbit/s would provide a speech quality at least as good as analogue systems;

2) Given that 1) could be fulfilled, SCEG should select and specify a speech coder for the GSM system.

At the plenary meeting of GSM in Madeira in February 1987, which is best known for the controversy over the radio access issue, SCEG could confirm that a digital solution would indeed offer superior speech QoS compared to analogue solutions. SCEG also proposed a compromise solution, combining features from the two best codecs tested, as the candidate for further testing and standardisation.

The SCEG work programme

The ultimate measure of speech quality must relate to the quality perception of live listeners. While automated quality evaluation techniques have become popular lately, no such techniques were available in 1985–87, so speech quality had to be determined “the hard way”, in subjective tests involving listeners. The standard for subjective speech quality assessment of telecommunication systems is the Mean Opinion Score test, or MOS for short. Listeners judge the speech quality by listening to recorded speech samples and assign scores on a five-point scale from poor to excellent. The scores are assigned a numeric value from 1 to 5 and are averaged over the listener group; the resulting value is the Mean Opinion Score, or MOS.
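Computing a MOS is then a straightforward average over the listener group; a minimal sketch with invented listener ratings:

```python
# Sketch of a Mean Opinion Score computation: listener judgements on the
# five-point scale are mapped to 1-5 and averaged. Data values are invented.

SCALE = {"bad": 1, "poor": 2, "fair": 3, "good": 4, "excellent": 5}

def mean_opinion_score(ratings):
    """Average the numeric values of the listeners' judgements."""
    return sum(SCALE[r] for r in ratings) / len(ratings)

# Hypothetical judgements from five listeners for one test condition
ratings = ["good", "fair", "excellent", "good", "poor"]
print(round(mean_opinion_score(ratings), 2))  # 3.6
```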

The initial stages of the GSM speech codec standardisation effort therefore consisted of the preparation of a set of basic requirements and the organisation of a large multi-national subjective test programme to compare candidate codecs with each other and with these basic requirements.

Basic requirements and rules

At the outset, more than 20 candidate codecs were proposed. The group therefore quickly agreed on a set of rules for the selection process:

1) Each country could only contribute one codec proposal. Several countries therefore had to carry out

The 1987 European Speech Coding Championship

JON EMIL NATVIG

Jon Emil Natvig is Senior Research Scientist in Telenor R&D

From the outset the fundamental working assumption was that the GSM system would be based on digital transmission [1]. It was believed that digital technology would not only enable a more efficient management of scarce frequency resources, it would also provide high speech quality and advanced features such as security and data communications.


national “qualification” rounds to arrive at one single candidate. SCEG was not involved in this process.

2) All codec proposals had to be submitted as hardware laboratory implementations operating in real time; no simulations were allowed. This requirement was meant to ensure a certain maturity of the candidate codecs – and a real commitment from the respective proponents.

The following basic requirements for candidate codecs were evaluated (details are given in [3]):

Gross bit rate

Bit rate is a less straightforward attribute than one might think. Basically, a higher bit rate should produce better speech quality for a given algorithm. However, in a mobile environment, we have to take into account bit errors. Using some of the available bit rate for protection of sensitive information will improve the average performance over the operating conditions of the codec, at the expense of a certain reduction of the maximum quality in error-free conditions. In the experiments, the gross bit rate, including any error protection, was fixed at 16 kbit/s.

Delay

Delay is involved in several difficult trade-off considerations in codec design:

a) Delay vs conversational quality: Transmission delay causes problems in two aspects: 1) long end-to-end delays disturb the flow of conversation, and 2) reflections in the network become more audible with increasing delay.

b) Delay vs complexity: Because power consumption in CMOS VLSI is almost a linear function of the clock rate of the processing machine, a relaxation of the delay requirement could make it possible to distribute the processing over a longer period with a lower clock rate, thus reducing the power consumption.

What matters for the user is the end-to-end delay, and the maximum allowed codec delay contribution in a coder-decoder back-to-back configuration was set to 65 ms.

Complexity

One basic requirement for GSM was to allow for portable handsets, so the complexity and power consumption of the speech codec were a major concern. This criterion was not quantified, but was assessed and taken into account in the selection process. In retrospect it can be seen that this attribute was more important than we knew at the time: recent analyses [4] have documented that more than 80 % of the processing power in the GSM transmission chain is used for speech encoding/decoding.

Transcodings

Medium to low bit rate speech coding removes subjectively redundant information. Therefore the reconstructed speech is not identical to the original speech waveform. Feeding reconstructed speech into a new coding/decoding stage could result in a large increase in the distortion introduced by the first codec.

The evaluation therefore included a transcoding condition corresponding to mobile-to-mobile calls (2 codecs in tandem).

Robustness to variations in voice spectra and speech levels

To be useful in a public service, a speech codec needs to be able to work with a wide range of speakers – men, women, children and adults – over a wide range of speech levels. The candidate codec tests included male and female voices and a dynamic range of speech levels of 20 dB.

Robustness to bit errors

Obviously, robustness to errors is essential in a mobile radio application.

The selection tests were carried out with random error probabilities of 0 %, 0.1 % and 1 %, representing a typical range of operation.

Digital or analogue?

The actual phrasing of the speech performance requirement in GSM was:

From the subscriber’s point of view, the quality of voice telephony in the GSM system shall be at least as good as that achieved by the first generation 900 MHz analogue systems over the range of practical operating conditions.

The implication of this formulation was that SCEG had to 1) consider end-to-end transmission including the public telephone network, and 2) include an analogue 900 MHz system under comparative operating conditions in the test programme.

The analogue reference system gave a substantially transparent speech quality in high C/N conditions and outperformed all codecs in error-free conditions. However, ALL digital codecs performed better on the average over the range of operating conditions than the reference FM system [3]. Thanks to error protection, a digital codec maintains a relatively high speech quality as transmission conditions deteriorate (until a marked breakdown point), whereas the quality of the analogue reference decreases more or less linearly. This typical behaviour is illustrated qualitatively in Figure 1.

ISSN 0085-7130 © Telenor ASA 2004

184 Telektronikk 3.2004

A Solomonic decision – the RPE-LTP codec

The comparison of the codecs was organised as a large multi-laboratory experiment. Running the tests in parallel in several laboratories was essential for the credibility of subjective test results. Details of the experiments and the results are reported in [3]. Six candidate codecs were presented at the first laboratory session hosted by CSELT1) in Torino in 1987. In this session some objective measurements were made and speech material for subjective testing was recorded. Subjective tests were carried out in seven different laboratories in the respective native languages. The overall analysis (which was undertaken by TF – Televerket’s Research Institute) showed that no single codec could be declared the winner in all aspects of the “competition”.

However, the results showed clearly that two codecs, both based on pulse excited source-filter modelling, were superior to the remaining four sub-band coders. In comparing these two solutions the development teams from IBM/France and Philips/Germany could see the possibility of several improvements. SCEG therefore decided to allow the French and German teams to propose a compromise solution, resulting in the merging of MPE-LTP and RPE-LPC into the RPE-LTP full-rate GSM codec [6] (see box).

The specification of the codec is given in GSM recommendation 06.10, which specifies the algorithm down to the bit level. This allows verification of implementations by means of a set of digital test sequences. The specification was defined in fixed point arithmetic to be implementable on fixed-point DSPs consuming less power. A public domain bit exact C-code implementation of this coder is available [7].

Telenor/Televerket contributions

TF’s strong involvement in the development of GSM in general was founded on experience and competencies built up over more than ten years of activities in several research “threads” in the mobile communication area, both conventional land mobile and satellite mobile communication.

TF had been studying satellite communication and digital communication almost since it was established in 1967, and TF scientists were strongly involved when Televerket developed the NORSAT satellite system to provide telecommunications to the Ekofisk offshore oil platform in the North Sea. This was one of the first domestic satellite systems in the world and probably the first system worldwide to be based on digital transmission – a bold choice at the time. The speech encoding used 32 kbit/s adaptive delta modulation – an advanced quantisation scheme rather than a speech compression method – combined with advanced error correction. It turned out that this system provided clearly better frequency utilisation than alternative analogue solutions based on FM.

The experience from the NORSAT system convinced TF scientists that the future of wireless mobile communication lay in digital transmission. In the 1970s TF was very active in ship-to-shore satellite mobile communication in the framework of CCIR SG 8 and in the development of the MARISAT system. In this work, TF proposed a digital solution following the experiences from NORSAT, but did not succeed in convincing the committee at this time. Digital speech transmission for ship-to-shore systems was introduced later by INMARSAT (Standard B), and TF also played an active part in the studies that led to the introduction of that system.

In all this work, TF established broad know-how in system and network design, QoS aspects of speech transmission, as well as subjective speech quality, which also included laboratory facilities for subjective evaluation of speech codecs.

Figure 1 Typical behaviour of analogue vs digital speech quality as a function of carrier to noise ratio / bit error rate. The digital curve maintains a high quality up to a marked operational limit, whereas the analogue curve degrades gradually.

1) CSELT – Centro Studi e Laboratori Telecomunicazioni.


GSM speech coding in a nutshell

The 13 kbit/s coding scheme selected for the full-rate GSM standard is an example of a so-called analysis-by-synthesis system.

The sampling rate is 8 kHz and each sample is quantized at 13 bit/sample.

The GSM full-rate codec operates on time frames of 20 ms (160 samples). This corresponds to one pitch period for someone with a very low voice or ten periods for a very high-pitched voice. This is a very short time, during which the speech can be assumed to be stationary. The speech frame is analysed using the source-filter model, which assumes that the speech signal is the result of a time varying linear filter excited by a source signal. For each frame, parameters are determined for a synthesis filter that models the effect of the vocal and nasal tracts. The short-term inverse filter flattens the spectrum envelope of the residual speech signal.

The next step is the long-term filter, which operates on 5 ms sub-frames (40 samples). In the case of voiced speech, the input signal will be a periodic impulse train where the pulses correspond to the glottal pulses. This periodicity is removed using a long-term predictive (LTP) filter, resulting in a weak signal. If the speech is part of an unvoiced phoneme, the residual is noisy and does not need to be transmitted precisely.

In the last step, the resulting residual signal is down-sampled by a factor of three. This is done by selecting every third sample of each 5 ms sub-frame, starting at sample 0, 1, 2 or 3, resulting in 4 candidate sequences. The algorithm picks the sequence with maximum energy for quantisation and transmission.

The decoder starts with the transmitted sub-sampled sequence, padding the missing samples with zeroes. The resulting residual pulses are fed into the long-term synthesis filter, which has had its parameters updated, reconstructing the short-term residual. These impulses are then passed through the short-term synthesis filter simulating the vocal tract, thereby producing the speech signal.

More information can be found in [6] and details are given in [10].
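The grid selection in the last encoding step can be sketched as follows. This is a simplified illustration of the decimation and max-energy choice only; the bit-exact procedure, including the preceding weighting filter, is defined in GSM 06.10.

```python
import numpy as np

def rpe_grid_select(residual):
    """Decimate a 40-sample sub-frame residual by 3 into four interleaved
    candidate sequences (start offsets 0..3, 13 samples each) and return
    the offset and samples of the maximum-energy candidate."""
    assert len(residual) == 40
    candidates = [residual[k:k + 37:3] for k in range(4)]  # 13 samples each
    energies = [float(np.dot(c, c)) for c in candidates]
    grid = int(np.argmax(energies))
    return grid, candidates[grid]
```

The decoder's zero-padding up-sampling then reinserts the 13 transmitted pulses at the selected grid positions.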

One important basis for TF’s contributions to GSM was the close collaboration between TF and ELAB. [2] gives an account of the work in the radio area. Already in 1979 – one year before the NMT network was launched – TF awarded its first research contract in the speech coding area to ELAB. The task given to ELAB was to study low bit rate speech coding for mobile radio applications. In the following years, a series of contracts were awarded and a fruitful collaboration evolved between TF and ELAB which established a competence environment in digital speech coding. The division of responsibilities was such that ELAB covered the work on speech algorithms and hardware implementations while TF dealt with speech system aspects, QoS and subjective testing.

(Figure: block diagram of the RPE-LTP codec. Encoder: input speech → LPC analysis / LPC inverse filter → pitch analysis / pitch inverse filter → weighting filter and RPE grid selection → ADPCM quantizer → MUX; the LPC parameters, pitch parameters and grid position are multiplexed into the bit stream. Decoder: DEMUX → residual decoder → up-sampling → pitch synthesis filter → LPC synthesis filter.)

In 1983 the FMK (Future Mobile Communication) committee was established in the framework of the Nordic Council to plan for a future digital land mobile radio system to follow after NMT in the Nordic countries. FMK also established a special sub-group (NR-FMK-T) to look into the speech coding issue. When SCEG was set up in 1985, the Nordic countries had already completed a coordinated subjective experiment to compare possible codecs for mobile radio.

A major result of the TF/ELAB collaboration [8,9] was a sub-band coder with sophisticated adaptive bit allocation as a candidate for GSM. Unfortunately, due to the very tight time schedule for the delivery of a hardware version at the CSELT laboratory session, some errors sneaked into the algorithm. Considering that none of the sub-band coders could match the two pulse excited candidates, we believe that this would not have changed the selection anyway.

References

1 Haug, T. How it all began. Telektronikk, 100 (3), 155–158, 2004. (This issue)

2 Maseng, T. Telektronikk, 100 (3), 161–164, 2004. (This issue)

3 Natvig, J E. Evaluation of six Medium Bit-Rate Coders for the Pan-European Digital Mobile Radio System. IEEE Journal on Selected Areas in Communications, 6 (2), 324–331, 1988.

4 Turletti, T, Tennenhouse, D. Estimating the computational requirements of a software GSM base station. July 1, 2004. [online] – URL: http://citeseer.ist.psu.edu/336927.html

5 Natvig, J E, Hansen, S, de Brito, J. Speech Processing in the pan-European Digital Mobile Radio System (GSM) – System Overview. In: IEEE GLOBECOM 1989, November 1989.

6 Vary, P et al. Speech Codec for the European Mobile Radio System. Proc. ICASSP 1988, New York, 227–230.

7 Degener, J. Digital Speech Compression – Putting the GSM 06.10 RPE-LTP algorithm to work. July 8, 2004. [online] – URL: http://www.ddj.com/documents/s=1012/ddj9412b

8 Ramstad, T A. Sub-band coder with a simple adaptive bit allocation algorithm. A possible candidate for digital mobile telephony? ICASSP, Paris, May 3–5, 1982.

9 Hergum, R, Perkis, A. 16 kbit/s speech coder for the GSM system. Trondheim, ELAB, 1986. (ELAB report STF44 F86133)

10 ETSI. GSM full rate speech transcoding. Sophia Antipolis, 1992. (ETSI recommendation 06.10.)

Jon Emil Natvig (58) graduated from the Norwegian Institute of Technology (NTH) as siv.ing. in 1970 and Dr. Ing. in 1975. He has worked with Televerket/Telenor since 1972. In the period 1975–1990 his main work area was in speech QoS aspects, mostly related to digital voice transmission in satellite and land mobile systems. From 1985 to 1989, he chaired the Speech Coding Experts Group responsible for selecting and specifying the speech processing functions in the GSM system. Since 1990 his main work has been in the field of voice user interfaces. He is currently working with the usability and accessibility issues in the Products and Markets research program at Telenor R&D.

email: [email protected]

SMS, the strange duckling of GSM

Finn Trosby

Finn Trosby is Senior Adviser in Telenor Nordic Mobile

Some introductory remarks

It might be appropriate to start with a warning: the reason for writing an article on ‘the birth of SMS’ is not to reveal a 15 year old story about huge achievements in terms of complex protocols and challenging combinations of radio, data and network design. The reader looking for that will inevitably be disappointed. The SMS or the ‘Short Message Service’ – as it has been labelled in every corner of GSM coverage – is definitely one of the simplest components of the GSM system.

The main reason for writing about the creation of SMS is because it is a story about innovation. SMS was indeed a true newcomer. All the other services of the GSM system – speech, fax and all the variants of circuit switched data – were well-known services, copied from the fixed network, in particular ISDN. SMS, as it was defined in terms of the stable versions of the relevant specifications, was an extremely simple messaging service tailor-made for GSM. It did not have its parallel or predecessor in any other system for offering mobile services to the public. The major part of the GSM community expected the circuit switched data and fax services to be the most important non-voice services, and regarded SMS to be more like an add-on that might increase the attraction of the GSM system without any commercial significance. The years to come proved it to be the other way round.

Books have for many years been published on the European mobile adventure during the last 20 years. A very good – and perhaps the most comprehensive – one is [1]. However, even the most complete volumes cannot cover every task of a huge endeavour like the GSM development. The fact that it covers more of the SMS design work well after the SMS specification was approved than before, gave me the final push to write some lines about just the period of time when I took part in the design work, i.e. from 1987 to 1990.

Background

Mobile communications of Europe and the US in the mid 80s were a true wilderness in terms of technologies and markets. In the area of speech services offered to the public, manual systems were replaced by the first automatic ones, giving ‘mobile telephony’ almost the same approach as the regular telephony that everybody had been used to. Paging services were steadily improving – both in terms of services and coverage.

In addition to the systems for public services offering, enterprises and organisations were making extensive use of a wide range of PMR systems. In the mid 80s most of those comprised

• A relatively simple radio network of one or a few base stations giving radio coverage to a limited area on a non-cellular basis;

• Medium complex protocol stacks, some offering analogue speech and some data services;

• A software application running on one or several host computers, being the primary fundament of the business itself (taxi companies, dispatch businesses, etc.);

• Interworking with the public networks – e.g. POTS – as a feature within a few of the systems.

At Televerkets Forskningsinstitutt – the Research & Development Department of Telenor at that time – I worked for a couple of years (1984–1986) as the project manager of a survey called Mobile Networks for Special Purposes (Mobilt spesialnett). The intention of the study was to explore the potential of mobile communications for other services than telephony. The survey comprised many activities – spanning from discussions with manufacturers and users of PMR systems in Europe and the US, via an experimental system set up by Televerket and Norwegian industry partners, to the specification of a mobile messaging system. The system was to be connected to an X.25 network and provide both hosting of PMRs and extension of the X.400 service to mobile terminals. Together with extensive market analysis on mobile non-voice services in general and mobile messaging in particular, the study reached a set of conclusions, of which one important was

• Offering mobile messaging within the framework of a public service portfolio may be a good idea since


- Mobile communications and messaging services make a very good match, since mobile users will frequently be out of coverage or turned off;

- Efficient store-and-forward mechanisms will be required to make the mobile terminal the prime target of crucial information to be delivered to the user as soon as possible. In this respect, it will to some extent outdate the fixed phone or the fixed data terminal handling the email.

• Offering mobile messaging jointly to both private and corporate segments may be a good idea since

- Unless attacking huge markets like the American, developing mobile transport services for just a set of business applications is aiming at bankruptcy. Either take this development to the US or make mobile communications for middle-sized markets that may attract both the private and professional segments;

- Enable business viability of new services like mobile messaging by seeking opportunities to bundle them in a flexible and non-complex way with a set of highly acknowledged services like telephony;

- Don’t think too rigidly about new services like mobile messaging being for the corporate market and for professional use mainly. It may rather be the other way round: take-up starts in the mass market and in the long term even supersedes the corporate market.

Start of work in CEPT and later ETSI

IDEG is established

In 1987 the work with the GSM specifications had taken some great leaps forward. The system architecture, the basic services, the characteristics of the radio interface, the signalling package – they all emerged with ever clearer contours. The GSM community had decided to establish three different working parties: WP1 – dealing with the services, WP2 – dealing with the radio aspects, and WP3 – dealing with the core network and the signalling aspects. The main group – or as it was to be called: the GSM main body – was the group to a) survey the progress of the whole project, b) assign tasks to the working parties, and c) approve of the solutions produced.

There was but one outstanding domain that laggedbehind: the detailed definition and specification of thedata services. The responsibility had been allocated toWP3, but that group had its hands full with the hugechallenges of establishing a complete package of allsignalling functionality that might be required inorder to fulfil the needs of the future GSM users. TheGSM main body concluded that there was a need foranother group to cater for the progress of data ser-vices definition. On May 20, 1987, the first meetingof IDEG – the Implementation of Data and TelematicServices Experts Group – was held in the city ofBonn. The group was chaired by Friedhelm Hille-brand from Detecon. IDEG had a somewhat blurredorganisational status when created, but it soonbecame apparent that it was most convenient to giveit the status of a working party, and eventually it wasrenamed WP41).

IDEG soon defined four areas that the group had to concentrate on if it was to have a chance to catch up with the achievements that had been reached in the other three working parties:

• Rate adaptation mechanisms;

• The radio link protocol (RLP), i.e. the protocol for carrying data at Layer 2 of the OSI model for the data services;

• The facsimile service within GSM;

• Message handling services that might be part of the GSM service portfolio.

A so-called drafting group was allocated for each area2). As a matter of coincidence, I was appointed chairman of the fourth one.

The tasks of the ‘Draft Group on Message Handling’

GSM WP1 had left IDEG with two crucial specifications: GSM 02.02 [3], an overall description of the ‘bearer services’ of GSM; and GSM 02.03 [4], an overall description of the ‘teleservices’ of GSM. [3] e.g. contained a variety of circuit switched data services, of which perhaps only the asynchronous non-transparent 9.6 kbit/s one ever came to some practical use. [4] e.g. contained the framework of a set of messaging services: the fax message service, three services on short text message conveyance, and

1) As the reader may know, the naming of the GSM organisational entities within CEPT and later ETSI and 3GPP changed on several occasions. Thereby, e.g. WP4 became GSM4 and SMG4 in synchronism with the corresponding renaming of the other working parties.

2) A couple of additional drafting groups were also established, but on a very preliminary basis; one or two meetings only.


a request that the GSM user should be able to access an MHS system.

The Draft Group on Message Handling – DGMH for short – which I was to chair, was to take responsibility for the short messaging and the MHS access.

The three services on short text messages were depicted in [4] as follows:

1. Short Message Point-to-Point Mobile Terminated
2. Short Message Point-to-Point Mobile Originated
3. Short Message Cell Broadcast

The current version of GSM 02.03 listed these three as separate services with different levels of importance: 1) – which was the service of carrying a text message through the network to the mobile terminal – was classified as one of the high priority services in GSM. 2) – which was the service of carrying a text message from the mobile terminal through the network to an entity for further conveyance – should be optional for a GSM PLMN operator. 3) – which was the service of spreading a text message on a broadcast basis to all or a sub-set of the mobile terminals within radio coverage of one or several base stations in the network – was for further study. It was further emphasized that all three services should exploit the capacity of the signalling channels of the radio path, so that they would not face congestion due to ongoing circuit switched traffic – voice or data – of the mobile terminal.

In [4] of that time this was about all that was said about the short message services. Before the establishment of IDEG there had been sketches on architecture and on how to accomplish Short Message Point-to-Point Mobile Terminated, but none of those documents were in the pile of officially approved guiding documents when the first meeting of IDEG was opened in Bonn.

The directives of [4] for defining MHS access were even scarcer. It merely said that specifications should be provided to allow the mobile user to exploit the services of ‘MHS services’. It turned out quite quickly that integrating GSM and MHS did not require further GSM specifications. GSM users could very well access the User Agent of an X.400 MHS via GSM’s own data services. A specification specifically on how to access MHS from a GSM terminal might perhaps represent a marginal improvement compared to relying on already established standards, but DGMH did not estimate this to be sufficient for suppliers to adopt it in their production plans. The MHS access activities within DGMH and IDEG were concluded in a technical report, probably the very first TR on data services of GSM. With that report, DGMH was allowed to leave the MHS access issue.

The objective of defining a Cell Broadcast service resulted in the required specifications, [6] and [9], at approximately the same time as the point-to-point services were approved. However, no core network transport mechanism was defined, and the GSM cell broadcast service was thus left with an area that had to be based upon proprietary solutions. I think it is fair to say that both in IDEG and in DGMH there was some hesitation among experts on how to design the broadcast service in a way that would be welcomed by operators. They were not troubled by possible technical problems, but rather by the feeling that it might be hard to find a viable business case. A sparkling contrast to this type of reluctance was demonstrated from Racal/Vodafone’s side. It is impossible to touch upon the work with the cell broadcast service in those days without giving the very enthusiastic Alan Cox full credit for cell broadcast ever being defined. However, when the time came to implement the GSM network and its services, the scepticism of the GSM experts had contaminated the product development divisions of the mobile operators. Few operators ever implemented cell broadcast, and hardly anyone made it a commercial success. The destiny of this service is interesting and should give the supporters of e.g. future MBMS something to consider.

For the reasons indicated, I will leave cell broadcast at this point and proceed with the mobile terminated and mobile originated services under the common acronym by which they gradually have been identified – SMS.

Items dealt with during the design of SMS

Service aspects – IDEG is given considerable latitude in the SMS design

As stated above, the spring 1987 version of [4] did not reveal much of the basic perspectives of WP1 on the short message services. Extensive research by historians is outside the scope of this article, but probably there were somewhat different opinions among the GSM delegations on the use of a text service. The Norwegian delegation e.g. filed a contribution in which it advocated the realisation of a service for telemetry applications, and I think other mobile operators had put forward similar proposals, but aiming at slightly different applications. The text in [4] is probably the result of their efforts to reach a consensus, leaving quite some freedom to the crew of designers.


Architecture

The Service Centre – a necessary entity

The point-to-point short message service – at least the mobile terminated part of it – would obviously be a store-and-forward service, since the mobile terminal might be turned off or out of coverage at the instant of delivery. Since it was explicitly stated that none of the regular network nodes of the GSM PLMN – such as the MSC or the BSC – should offer store-and-forward capabilities, there had to be an extra node with some genuine store-and-forward capabilities. So an additional node with the somewhat generic name Service Centre (SC)3) was added to the topology of GSM. The concept of the SC had lingered for some time also within WP1 before IDEG came to work, however without any specific characteristics. The procedure of short message transfer should then be SME ⇒ SC ⇒ MS and SME ⇐ SC ⇐ MS. The entity SME, the Short Message Entity, was whatever entity that might be connected to the SC in order to send or receive short messages, including a GSM MS. The last step was important, because it brought symmetry to the service aspects of the two components point-to-point mobile terminated and point-to-point mobile originated, thereby effectively integrating the two. It was therefore finally decided to comprise the two services in one service specification, namely [5].

The debate on why and how to distinguish between value-added services (VAS) and teleservices had been going on in Europe for quite some years, especially in the UK with its pioneering role in bringing market liberalism to European telecommunications. The rigorous definition of those years implied that an information stream was subject to value adding if it was in any way converted – even slightly re-formatted – or stored for some time. The issue immediately came up with the introduction of the SC. Where should it reside, within or without the PLMN? Due to the genuine VAS character of SMS, the UK delegation strongly opposed the first sketches of the architecture, where the SC was included in the PLMN, which they regarded as a platform for teleservices only. The operators of the other countries had at that time no strong opinions and neither had the manufacturers, so it was decided to logically locate it outside the PLMN. Since a more pragmatic view on VAS has gradually replaced the original one, one could question whether the architectural design we chose at that time was the most feasible. In many countries, SMS was regarded both by operators and regulators as an add-on to the mobile telephony service and consequently part of the same market. For the forthcoming standardization 1987–1990, however, it resulted in a relaxed attitude towards making a mandatory specification of the interface SC – MSC, which may definitely be ranked as a shortcoming of the SMS standards from those years.

Long distance SMS

Another item of discussion was how the PLMN was to transfer the short message internally. For obvious reasons, the routing principles of a mobile terminated short message as well as a mobile originated short message became identical to the routing principles for a speech or data call set-up. Thus, there might be a long haul transfer MSC – MSC, similar to the connection of a telephone call within one PLMN or between two PLMNs. Should it be transferred by means of well defined mechanisms of user data transport like X.25, or should one produce a certain operation within the signalling system of GSM – MAP – especially for short message transfer? The question was discussed both within DGMH and SPS/SIG, which was a body in ETSI that had responsibilities over a wide area of signalling tasks within both fixed and mobile networks. Several experts advocated the principal view that transfer of user data should not be mixed with signalling functionality, and recommended X.25 for this particular undertaking. The UK delegation in IDEG opposed that position. The UK operators were – unlike many of the other operators that took part in the GSM project – genuine mobile operators, with no fixed or data network operators within the same corporation. From their business point of view, SS No 7 was a free lunch, whereas X.25 was not. After some consideration, it was decided to base the short message transfer MSC → MSC on SS No 7 by adding an extra operation ‘forward_short_message’ to the repertoire of MAP operations. Retrospectively, users of the SMS should be grateful to the UK delegation for contributing to a correct decision, even if it might have been for other reasons than the one mentioned above: the smooth and uncomplicated interconnect and international roaming on SMS stems from the choice to rely on the in-house capabilities of GSM.

Defining the length of a short message

The choice of MAP as the long haul carrier of the short message brought an end to another discussion that had been going on for some time: how long should the short messages be allowed to get? MAP was based upon TCAP, and thereby on the concept of bilateral operations, which were assumed to carry small weights in terms of user information. To arrange for MAP operations to carry more load than the standard request–response sequence of TCAP allowed for would be both complex and cumbersome. But being applications based upon a signalling system of global coverage, operations within MAP or TCAP necessarily imply a substantial overhead. When analysing the ‘forward_short_message’ operation and removing the overhead, we found that there were somewhat more than 160 characters of the alphabet chosen (see below) left for user data. It was decided to round down the figure to the closest decade, and so the number 160 became the eventual size for regular SMS. WP1 had earlier indicated 128 characters as a very tentative request for the short message length, but it had no problems with increasing the limit to 160.

3) Later changed to ‘SMSC’.
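The payload arithmetic behind the 160-character figure can be illustrated: with a 7-bit alphabet, 160 characters occupy exactly 140 octets. The sketch below shows the septet-to-octet packing idea only (a simplification that maps each character to its low 7 bits, ignoring the real GSM alphabet table and its escape mechanism):

```python
def pack_7bit(text: str) -> bytes:
    """Pack characters into 7-bit septets, GSM-style (simplified sketch:
    each character is mapped to its low 7 bits)."""
    bits, nbits, out = 0, 0, bytearray()
    for ch in text:
        bits |= (ord(ch) & 0x7F) << nbits
        nbits += 7
        while nbits >= 8:
            out.append(bits & 0xFF)
            bits >>= 8
            nbits -= 8
    if nbits:
        out.append(bits & 0xFF)
    return bytes(out)

# 160 seven-bit characters occupy exactly 160 * 7 / 8 = 140 octets
assert len(pack_7bit("A" * 160)) == 140
```

The same arithmetic run backwards explains why an 8-bit payload budget of 140 octets yields the familiar 160-character limit.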

The short message over the radio path

But which GSM capabilities were required for sending or receiving the short messages over the radio interface? The answer was pretty much given by the requirement that short messages should flow freely to or from the mobile terminal whether the terminal was idle or busy with an ongoing call: it had to be on one of the signalling channels. I consulted my colleague Knut Erik Walter, who was at that time heavily involved in the work with the very essential [7], about whether he could take a look at what might be required in terms of specification work to cater for the SMS radio interface. Within a very short time he drafted [8], which was thereafter approved in WP3, and which I think stayed stable and without the need of any change for a very long time.

[8] allocates the signalling channels SDCCH and SACCH according to Table 1.

[8] also allowed the network to keep the signalling resources, e.g. in periods with frequent message traffic: “… the network side may choose to keep the channel and the acknowledged mode of operation to facilitate transfer of several short messages for or from the same Mobile Station. The queuing and scheduling function for this should reside in the MSC”.
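The allocation of Table 1 reduces to a simple rule. As a rough illustration only (the function name is invented here, not taken from [8]), it can be expressed as:

```python
# Rough illustration only (names invented, not taken from the GSM specifications):
# the short message rides on SDCCH when no traffic channel (TCH) is allocated,
# and on SACCH when a TCH is in use for an ongoing call.

def sms_signalling_channel(tch_allocated: bool) -> str:
    """Signalling channel carrying the short message for a given TCH state."""
    return "SACCH" if tch_allocated else "SDCCH"
```

If the TCH state changes during the transfer, the conveyance simply follows the rule for the new state, e.g. SDCCH → SACCH when a call is set up mid-transfer.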

Reports, Messages_Waiting and some other features

As indicated above, most people outside DGMH seemed to regard SMS mainly as a machine-to-person service, e.g. as the main vehicle for voice mail alerts. In that respect, there would be no need for any type of confirmation or acknowledgement of a short message arriving at its destination. Fortunately, the DGMH crew appeared to have a perspective also for person-to-person messaging, and to have recognized the usefulness of being offered information concerning if and when the recipient actually received the message. A question arose at the stage of service definition in the case of a mobile recipient: should the confirmation be given at the event of manual actions taken by the user to display the message, or at the event of the terminal receiving the message? Picking the second alternative was an easy choice to make. The major challenge is to convey the message over the radio path at a time when the mobile is turned on. When this is achieved, the chance that the message will somehow be destroyed before the user may read it is less than marginal.

From [2] I had learned that to make messaging effective for mobile communications, one has to provide functionality that makes the information transfer swift and easy, meaning e.g. as far as possible overcoming the annoyance of the inherent instability of the mobile terminal’s contact with the network. I therefore proposed an additional interworking between the SC and the GSM network. When an attempt to transfer a short message to the mobile fails due to the mobile being turned off, the location registers take a note of the event together with the address of the SC that made the attempt. When the mobile user turns on his phone again, the location registers – provided that the operator is applying IMSI Attach / IMSI Detach – are notified and in their turn inform the relevant SC that it might be a good idea to repeat the transfer attempt. The feature was labelled ‘Messages_Waiting’, and aimed to be particularly useful for those who frequently would turn off their mobiles to reduce battery consumption, attend meetings or events where mobile phone calls were banned, etc.
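The interworking described above can be sketched as a small model. This is illustrative only, with invented class and method names; in real GSM the location registers are the HLR/VLR, and the actual procedures are defined in the specifications, not here:

```python
# Illustrative model only: invented names, not GSM protocol elements.
# A location register notes failed deliveries per service centre (SC)
# and alerts those SCs when the mobile attaches again (Messages_Waiting).

class LocationRegister:
    def __init__(self):
        self.attached = set()   # identities of mobiles currently attached
        self.waiting = {}       # mobile identity -> set of SC addresses to alert

    def deliver(self, mobile, sc_address, send):
        """Try to deliver; on failure, remember which SC to alert later."""
        if mobile in self.attached:
            send(mobile)
            return True
        self.waiting.setdefault(mobile, set()).add(sc_address)
        return False

    def attach(self, mobile, alert_sc):
        """IMSI Attach: alert every SC that holds messages for this mobile."""
        self.attached.add(mobile)
        for sc in self.waiting.pop(mobile, set()):
            alert_sc(sc, mobile)
```

The point of the design is visible in the model: the SC need not poll; the network calls it back as soon as the recipient becomes reachable.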

Other features that may be mentioned are:

• Validity-Period, the period that a short message stored in the SC due to absence of the receiving party should be kept before it might be deleted;

• Service-Centre-Time-Stamp, the time when the SC receives a short message to be delivered. Always to be included in the short message delivered to the terminal;

Channel dependency                      Channel used
TCH not allocated                       SDCCH
TCH not allocated → TCH allocated       SDCCH → SACCH
TCH allocated                           SACCH
TCH allocated → TCH not allocated       SACCH → SACCH, opt. SDCCH

Table 1  The impact that traffic allocation has upon the choice of signalling resource to be used for the conveyance of the short message

• Protocol-Identifier, identifying which protocol is to be applied at the application layer;

• More-Messages-to-Send, a Boolean included in the short message delivered to the terminal to tell whether there are more messages in the SC still to be sent to the recipient.
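Taken together, the fields above amount to a small per-message record. The following grouping is illustrative only, with invented Python names; the real parameter encodings are defined in [5]:

```python
# Illustrative grouping only: field names are invented here; the actual
# encodings of these parameters are defined in GSM 03.40.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class ShortMessageInfo:
    validity_period: timedelta            # how long the SC keeps an undelivered message
    service_centre_time_stamp: datetime   # when the SC received the message
    protocol_identifier: int              # application-layer protocol to apply
    more_messages_to_send: bool           # SC still holds messages for this recipient
```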

The alphabet

Now, what should be the alphabet of the short message? WP1 had in [4] made a reference to the ITU and ISO standards of International Alphabet no 5 (IA5), which were designed for what were anticipated to be the text services of the future, in particular MHS. In DGMH, we examined the IA5 standards, which were designed with the objective of providing the different regions of the world with suitable alphabets within the framework of adequate character lengths, in particular 8 bits. The exercise of finding a suitable alphabet for SMS occurred chronologically just after the corresponding work item in ERMES, which had approximately the same focus as DGMH had at that time: finding a sufficient set of characters for the western parts of Europe while spending as few bits as possible. The ERMES alphabet was the result of picking the characters from the most used alphabets while still being able to wrap the whole thing up in a 7-bit notation. We therefore proposed to use the ERMES alphabet as the default, but opened up in the protocol for the user to request other alphabets. Both IDEG and WP1 supported this proposal.
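The 7-bit notation is also what makes the 160-character figure work out neatly: 160 seven-bit characters occupy exactly 1120 bits, i.e. 140 octets. As a rough sketch (my illustration, not code from the specifications), septets can be packed densely into octets like this:

```python
# Rough sketch only (not from the GSM specifications): pack 7-bit character
# values ("septets") densely into octets, low-order bits first.

def pack_septets(values):
    """Pack an iterable of 7-bit values into bytes; 160 septets -> 140 octets."""
    out = bytearray()
    acc, bits = 0, 0
    for v in values:
        acc |= (v & 0x7F) << bits   # append 7 new bits above those already held
        bits += 7
        while bits >= 8:            # emit each completed octet
            out.append(acc & 0xFF)
            acc >>= 8
            bits -= 8
    if bits:                        # flush any remaining partial octet
        out.append(acc & 0xFF)
    return bytes(out)
```

With this packing, 160 characters fit into 140 octets with no bit left over (160 × 7 = 140 × 8), which is consistent with the “somewhat more than 160 characters” found after removing the protocol overhead.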

A review of the work leading up to approval

It may be worthwhile to try to summarize what was achieved, and to mention the crew that produced the results.

Merits and flaws of the SMS design

In my opinion, the merits of the SMS design were the following:

• Simplicity, both in terms of functionality and in terms of architecture (e.g. only one SC in any MS → MS messaging);

• The merge of the two original point-to-point services into one service – SMS – with complete reciprocity between ‘mobile terminated’ and ‘mobile originated’;

• An SMS based entirely upon in-house capabilities, e.g. SS no 7 instead of X.25;

• Reception confirmation for MS to MS messaging;

• Automatic delivery of waiting messages to a recipient just after he had switched on his mobile phone.

On the other hand, some major flaws are, retrospectively, not hard to pinpoint:

• A protocol version number was not allocated at the transfer layer, requiring new versions to be backward compatible. Apparently, none of the DGMH members were well-experienced protocol experts!

• We were not bold enough in terms of exploiting future possibilities for MS to MS conversations, e.g. group chatting. Both address conversion (e.g. E.164 ↔ name@domain) and the handling of distribution lists within the SC were discussed, but a number of people clearly expressed that we had gone far enough with our perspectives on SMS conversations!

• The same was the case with message templates, an idea inspired by transaction services within the X.400 domain and only very briefly and informally mentioned within the GSM and DGMH community. As with the above ideas, it did not have the necessary support to be pursued. However, it might have boosted SMS as a tool for mCommerce!

The ‘SMS crew’

No individual expert or company should claim to be the ‘father’ or ‘creator’ of any service or major functionality produced during the GSM development. The GSM project was indeed a multi-national collaboration at its best. The cooperative working procedure applied to IDEG and DGMH as well. The latter consisted, in my period as its chair, of a group varying from 5 to 8 people, all dedicated and contributing to the ongoing work. I would in particular like to mention Alan Cox from Racal/Vodafone (later Vodafone), Kevin Holley from Cellnet and Eija Altonen from Nokia. I would also like to compliment Friedhelm Hillebrand for being an extremely good chairman of IDEG. Most of the IDEG participants in 1987 were not familiar with international collaboration like GSM, but in a very gentle and constructive way Fred encouraged them to join in immediately and do their best. Fred left the chair of IDEG for other GSM appointments in 1989, and was replaced by Graham Crisp from Plessey Networks and Office Systems. Graham had chaired the draft group on rate adaptation mechanisms (TAIW, Terminal Adaptation and Interworking), and thus became the first person from industry to take a chair in CEPT. Graham, who had participated in IDEG from its first meeting, had exactly the same exquisite skills in chairing the group as Fred had shown.


I would also like to express my appreciation to colleagues in my own company – in particular Jan Audestad and Knut Erik Walter – for swift responses to our requests on MAP upgrades ([10]) and the establishment of the radio interface functionality ([8]), and for much good advice along the way.

The tricky part: what can we learn from the SMS adventure – if anything at all?

Everyone knows stories about the strange random-walk characteristics of business and technology development: the yellow stickers from 3M, the chat line of the Swedish phone company, and so on. Many of those examples derived from internal mishaps and were just accidentally transferred to the production lines. Yet they became great successes.

The birth of SMS was definitely not due to a mishap or accident, even if the perception of SMS in 1987 was – as stated earlier – not very clear. Luckily enough, it was not excluded from the list. The story has a slight resemblance to those of the Norwegian fairy-tale character Askeladden, who picks up all kinds of items that he encounters on the presumption that they may come to use some day. In the adventure they always do, resulting in a massive success. In real life, they sometimes pay off – as with SMS. Trying to picture the same situation today, it is not hard to imagine the average modern executive immediately tearing the SMS concept of [4] to pieces: “When there is no extensive and convincing text of market analysis, there should be no further transfer to a lengthy and costly design and production process”. The strange thing is that if one imagines the modern product development filtering applied to all services other than SMS, they might have passed the checkpoint procedures without difficulties. The speech service was a banker; no one doubted that there was a substantial potential in migrating telephony from the fixed to the mobile networks. The fax service also had a high standing: fax had been a popular service in the fixed networks for years! The circuit switched data service also had its fixed network parallels that made perspectives of high usage probable. Thus, for all three services it would have been fairly easy to produce convincing arguments in the context of today’s product development forums why they should all be profitable. In this way, we can very well envisage a situation where the methods of today would have accepted fax and circuit switched data – the failures – and discarded SMS – the success!

This should not be taken as a polemic intended to give the impression that the participants of DGMH had some sort of ingenious formula or supernatural gifts that enabled them to see what nobody else saw: the full potential of SMS. Certainly, experience from earlier work and objectives had provided the group with a hunch that messaging between mobile users might be a very good idea and worthwhile pursuing. However, no one within DGMH, IDEG or GSM was even close to comprehending the wilderness of applications that is provided by today’s SMS. mCommerce, the flora of CPA based services, customizing the mobile handset by download of the required parameters, the short message as the initiator of push services; none of those applications were thought of even vaguely. They just popped up because SMS was at hand, virtually from the start within and between any GSM networks, and easy to apply.

Finally, trying to conclude, I would say that the success of SMS – unexpected even among the core GSM experts – might be associated with the following keywords:

• Abundance and simplicity combined. The willingness to include some abundant dark horse that cannot be justified through clear-cut market analysis, while keeping in mind that also for an item of this category the rule applies that market appeal is proportional to simplicity;

• Hunch. The willingness to adopt, trust and support some ideas – not all, not even many – that give some elusive perception of great potential that cannot be justified through plain and clear-cut market analysis;

• Risk. The willingness to take some calculated risk when deciding upon the design;

• Seeing business in a broad and long-term perspective. The willingness to accept and endorse categories of work that open up vast new business areas, even if they will be available also to one’s competitors and even if it may take several years before they pay back.

The development of GSM almost coincided with a huge paradigm shift in the business of telecommunications: leaving the age of monopolies and entering the age of liberalised markets. The benefits of this transition – e.g. in terms of price reductions, more effective sales and distribution channels, and flexible and customer-oriented production lines – have been emphasized ad nauseam, and will be contradicted neither by me nor by anybody else. But no change is entirely good or bad. With the shift mentioned above, something was also lost. The corporate environment that fostered the characteristics listed above for the ability to take substantial leaps forward – e.g. the cardinal ‘hunch’ – was far more apparent in the dinosaur-like telcos of the past than it is in the streamlined and ever cost-reducing operating companies of today.

‘Hunch’ is what you get when – in between the tightly scheduled tasks of today’s demands – you are allowed to stray into areas of terra incognita with almost no other purpose than to explore. The ‘Mobilt spesialnett’ endeavour was one such exploration of mine, and it meant a lot to my qualifications for carrying out the objective that we were confronted with. I am sure that the other people involved with SMS – in WP1, IDEG and DGMH – had their corresponding strays, and that those were equally beneficial to them. The previous telcos could afford that luxury. The present ones cannot, and the soil is inevitably less fertile. Thus, today’s SMS chatting crowd can be happy that the GSM system definition phase occurred well within the era of the previous regime. I’m not quite sure that the SMS sketches of 1987 would have passed the WP1 examination if its members had possessed the mindset of the operator community of 2004.

Finn Trosby graduated from the Norwegian Institute of Technology (NTH) as Chartered Engineer in 1970. He entered the R&D department of Televerket/Telenor in 1972, and since 1980 his main work area has been mobile communications, mostly related to system aspects relevant for new technologies. From 1987 to 1990, he chaired the Draft Group on Message Handling in the working party responsible for designing the data services in the GSM system. From 1990 to 1996 his main work was the design of tools for the GSM operator. In 1996, he entered Telenor’s mobile operator in Norway, Telenor Mobil, where he has since worked with company strategy.

email: [email protected]

References

1 Hillebrand, F (ed.). GSM and UMTS. The Creation of Global Mobile Communication. John Wiley, 2002.

2 Mobile Networks for Special Purposes (Mobilt spesialnett). Study on messaging for mobile communications carried out at the R&D division of Televerket/Telenor in the mid-80s.

3 GSM 02.02 – Bearer Services (BS) Supported by a GSM Public Land Mobile Network (PLMN).

4 GSM 02.03 – Teleservices Supported by a GSM Public Land Mobile Network (PLMN).

5 GSM 03.40 – Technical Realization of the Short Message Service Point-to-Point.

6 GSM 03.41 – Technical Realization of the Short Message Service Cell Broadcast.

7 GSM 04.08 – Mobile Radio Interface Layer 3 Specification.

8 GSM 04.11 – Point-to-Point (PP) Short Message Service (SMS) Support on Mobile Radio Interface.

9 GSM 04.12 – Short Message Service Cell Broadcast (SMSCB) Support on the Mobile Radio Interface.

10 GSM 09.02 – Mobile Application Part (MAP) Specification.


The Norwegian GSM industrialisation – an idea that never took off

Rune Harald Rækken

Rune Harald Rækken is Senior Research Scientist in Telenor R&D

While Sweden has Ericsson and Finland has Nokia, what did Norway get out of the intense participation, the breakthrough of ideas and the acquired knowledge from the GSM specification work? From a scientific point of view, there is no doubt that the Norwegian effort put into the GSM work was a huge triumph. The choice of a narrowband solution for GSM gave a cost saving in the range of billions of kroner (NOK) during GSM deployment in the sparsely populated Norway, compared to the wideband solutions proposed and strongly supported by other GSM players and even at political level. In the 1980s Televerkets Forskningsinstitutt (Telenor R&D) had a role as a national telecommunications industrial locomotive, and put a lot of effort into paving the way for the Norwegian GSM industry. The utilisation of the Norwegian GSM position by the national industry players is, however, nothing less than the largest disappointment in the history of Telenor R&D. It is now time to reveal some of the events of the late 1980s, when an effort was made to start growing a national GSM industry based on the research and standardisation effort done by Televerket and the Electronic Laboratory in Trondheim – ELAB.

Introduction

Telenor (then Televerket) had been very active in the standardisation of GSM. Televerket held the chairmanship of SCEG (Speech Coding Expert Group); we participated in WP1 dealing with services and in WP2 dealing with radio subsystems, and Televerket also provided the chairman and editors for specifications in WP3 (protocols). Televerket also financed the development of the radio modem designed by the Electronic Laboratory in Trondheim (ELAB), which paved the way for GSM as we know it today (see also Torleiv Maseng’s article in this issue).

A lot of the ideas generated by Televerket and ELAB can be found in the GSM specifications. Hence the GSM system must be said to be a huge triumph, seen from the perspective of the Norwegian researchers involved in specifying GSM. One would therefore expect Norwegian industry to be well positioned for entering the foreseen and important GSM market, growing a national GSM industry on the basis of the work done by Televerket and ELAB.

Trying to build a Norwegian GSM industry

Indeed, the idea of building a Norwegian GSM industry was elaborated, and a consortium was founded to join the Norwegian forces towards an industrialisation of GSM and to give Norwegian industry players a technological lead in the implementation of GSM. The consortium was founded by ELAB, Elektrisk Bureau (EB) Telecom, and Simonsen Elektro. Televerket’s contribution to the consortium was to make available the know-how of the technical foundation of GSM, and to be the customer of a trial system. At that time it was estimated that Televerket had put a research effort of 50 million Norwegian kroner (NOK) into research and standardisation activities on GSM.

EB was one of the Norwegian industry giants in the area of electronics and telecommunications (manufacturing, amongst other things, fixed-line telephones and telephone switches). Simonsen was well known for designing and manufacturing high-end waterproof NMT telephones. The idea was that the consortium of ELAB, EB and Simonsen would design a GSM trial system that Televerket would buy and operate. This would be the first step in preparing the Norwegian telecom industry for entering the GSM market and taking advantage of the Norwegian effort put into GSM. Televerket would contribute know-how on the GSM standards, ELAB would contribute know-how on the implementation of GSM, EB would manufacture base stations, and Simonsen would provide the prototype GSM mobile station.

The project started in 1988 with planning and specification of a prototype system and whatever was needed to industrialise GSM in Norway and take advantage of the relatively large effort that the Norwegian players had put into GSM standardisation over the years.

The collapse

However, the outcome of the project on the Norwegian GSM industrialisation was not much to announce. The consortium more or less collapsed in 1989 when Ericsson, who also had their own activities on GSM, bought EB Telecom. At the same time, Simonsen had got into financial trouble – it was not easy being a small company with only one leg to stand on, selling high-end mobile phones in a market where the big international players had dumped the price level of mobiles far below the production price of Simonsen phones.

Simonsen did, however, deliver the promised prototype of a GSM telephone. But the company never managed to take GSM phones into production, and a few years later the name and company of Simonsen were history.

So the Norwegian Ericsson or Nokia never came into being. The large Norwegian companies withdrew from GSM development in deference to their foreign owners. The smaller companies never had the resources to take on such a big task. And that was the brutal end of the Norwegian GSM industry fairytale.

The Nordic GSM trial system

Televerket promoted the Norwegian GSM consortium. But it was realised that the Norwegian telecom industry would not be able to deliver vital GSM system components in time for a GSM trial system that Televerket could use to gain practical experience with GSM before contracts for a commercial system had to be placed. Hence the strategy was changed. Together with the other Nordic telecommunications administrations, Televerket started to plan a common Nordic GSM trial system.

The goal of the Nordic trial system was, among other things, to verify the radio subsystem of GSM and evaluate GSM performance in different environments (terrain types). The system was to be established in the Oslo area, and the plan was to add a branch in the Copenhagen area later on to evaluate GSM performance in flat terrain.

The next planned phase of the project was for the Nordic telecommunications administrations to perform national tests using the system.

It should be noted that the Swedish Televerket had their own GSM field trial together with their national industry, Ericsson. They nevertheless also joined the Nordic GSM trial project.

Several vendors were invited to submit tenders and implementation plans for a Nordic trial system. In August 1988 the Danish company Storno was chosen as the vendor of the GSM trial system. By that time Storno had been bought by Motorola, so in practice the trial system was a Motorola system. The Storno technicians were, however, very amused to talk about their sub-contractor Motorola.

The lead-time was very short, as the plan was for the trial system to be in full operation at the beginning of 1989. Motorola was also delivering a corresponding GSM trial system in the UK.

The system was located at Televerkets Forskningsinstitutt (Telenor R&D) at Kjeller outside Oslo, and consisted of one switch, four base stations, eight interference generators and three mobile stations (mounted in 19” racks).

Experimental technology

It soon became quite evident both that GSM at that time was an experimental technology, and that GSM is far more complex than NMT1). We soon got into discussions with the Storno/Motorola technicians, who wanted to stay at the test site day and night to make the system operate properly. We did not meet this request, however. We regarded 16 working hours a day to be sufficient.

Europe’s first GSM trial system arrives at Televerket’s Research Department (Telenor R&D) at Kjeller in 1989. The author (left) and Senior Engineer Geir Trøan from Televerket’s Services Department are carrying parts of the equipment into the building

1) As a curiosity, I can mention that in the second part of the 1970s there were vendors claiming that NMT was so complex that it could never be implemented. I think that some people could get the same idea about GSM in those prototype days.


The technicians often received new and updated code from Motorola in the USA when they came to the test site in the morning, and there was a struggle to make the system behave in a stable way and give reliable and reproducible test results.

After a long period of alternating successful and disconnected GSM calls, and everlasting system optimisation and improvement, acceptance tests finally started in the autumn of 1989. We did a lot of measurements to verify compliance with the specifications set for the test systems. It turned out, however, that the system was not 100 % stable, and sometimes it was impossible to make reproducible tests.

Hence, after a while it became evident that the trial system did not meet the expectations and the purpose we had when we entered the project. It turned out that we could not use the test system to do a detailed mapping of its performance over to a commercially operated GSM system, and so get the flying start as GSM operators that we had wished for.

However, during the work with the trial system the representatives of the Nordic telecommunications administrations got a lot of hands-on experience of working with GSM and of what the implications of GSM might be. So did Motorola.


For a presentation of the author, turn to page 169.


MOU of the GSM-MoU: Memorizing Old Undertakings of the GSM-Memorandum of Understanding

Petter Bliksrud

Petter Bliksrud is Senior Adviser in Telenor Nordic Mobile

When the GSM specifications were mainly finished in the late 1980s, European operators were reluctant to invest in a new and immature system without knowing whether other countries would do the same. In order to move forward, the Western European community needed a declaration, and in 1987 the GSM Memorandum of Understanding (MoU) was signed by 13 countries. This article goes through the ideas and contents of the MoU document as well as the cooperation under it. The operators took on a heavy commitment and worked together for the success of GSM. The opportunity called, and it is not certain that the regulative framework, market conditions and business willingness will ever again combine to allow similar undertakings.

Background

The CEPT environment of the early 1980s, when the Groupe Spécial Mobile was born and fostered, was the cooperative framework of the governmental telecom monopolies. The responsibility of these monopolies was often twofold: in addition to operating the main national telecommunication systems, they also had the regulative task of licensing possible private use of radio equipment and setting the functional requirements for such equipment. The latter engagement did not always include a commitment to actually apply these requirements in the national market. In fact, it happened more than once that a member tabled strict opinions in the discussions on a new radio standard and afterwards declined to introduce the system at home. In other words, the elaboration and adoption of a CEPT standard did not automatically oblige the member countries to deploy and operate the corresponding system. Manufacturing equipment to a CEPT recipe could then be a somewhat risky business for the vendors; and if the standard was comprehensive and on the borders of the state of the art, as the GSM standard was, investing in research, development and production could mean high risk and little business.

In addition, the mobile telephony environment in Europe at that time was such that a number of countries had introduced analogue systems to build up their national markets. These systems normally differed from country to country, and cross-border roaming was no issue, except in the Nordic countries where the national operators/administrations had worked out their common NMT system. So why, one could ask and many did, should a country (i.e. a national operator, since monopoly was the general modus vivendi) invest in a new, immature system if there was no certainty of the other European countries following suit? Something new and alluring was necessary to convince the end users of the change: roaming across European borders. A further prerequisite for successful introduction of a new system was acceptable equipment prices, i.e. the economy of scale in mass production.

A new system was also desired by politicians, as it would be an ample opportunity for introducing competition within telecommunications and, of course, form a basis for a new deal in European industrial engagement.

So, the Western European community needed a declaration to move forward: the GSM Memorandum of Understanding was signed in a CEPT Plenary meeting on 7 September 1987 by 13 countries:

• The Quadripartite Group: France, Germany, Italy and the UK

• The Nordic Group: Denmark, Finland, Norway and Sweden

• The rest of the CEPT Group: Belgium, Ireland, the Netherlands, Portugal and Spain (actually signing 3 days later)

Signing later, but not much later, were Austria, Greece, Luxembourg, Switzerland and Turkey.

The MoU document

The mediating hand in writing and bringing forward the MoU was that of the UK delegation.

The UK was one of the quadripartites mentioned above, who were allied through an agreement of cooperation, originally based on the French-German GSM system research programme. Members of this group had made considerable investments in the development of the GSM system, and it was vital for them in particular not to lose momentum in the work towards network implementation. These four countries had internal meetings to coordinate their activities (and the marketing of them), in the CEPT work and towards the MoU initiative. From time to time, these meetings included Sweden, representing the Nordic countries. This extended participation was probably a recognition of the fact that the most advanced area in mobile communications was high up north, and that suspicious jealousy, caused by lack of information, could impair the concerted effort towards the common goal.

The UK telecommunications manufacturing industry was indeed minor compared to the industry on the continent, but the UK was clearly leading in, and bringing experience from, a field of emerging importance: competition. The UK participation under the MoU was divided between two operators, BT/Cellnet and Racal/Vodafone, and one administration, the Department of Trade and Industry, there to reign over them and rein them both in, in case of inclinations to diverging opinions. This dual operator mode in the meetings was in the beginning clearly a strange one to the rest of the congregation, but a seed soon to grow in European telecom cooperation.

The original MoU document was made in two languages, French and English, with neither taking precedence over the other. This signalled modernity, with the French language no longer peremptory. The clause was not really cumbersome, since detailed exegesis was never expected to become an issue. The other document clauses simply stated the aims of the signatories, in broad terms, and set some rules on how to cooperate to fulfil them.

The Cooperation under the MoU

Signatories

The MoU document stated that a signatory could be any telecom operator within CEPT licensed in his country to provide “public digital cellular mobile telecommunications services” and, in addition, any CEPT telecom administration. In principle, the MoU cooperation has never been a pure Operators’ Club, and the UK Department of Trade and Industry (DTI), the original regulator member, has contributed substantially to the work over the years. But few other Administrations/Regulators have seized the opportunity to become members and thus have information straight from the horse’s mouth. The Norwegian Post and Telecommunication Authority (NPT) once tested participation in a plenary some time after the first initial phase of the MoU cooperation. They did not apply for membership afterwards, so maybe the information was not as juicy as we operators would like to believe.

The organisation of the work in the MoU group

The Memorandum prescribed that meetings be held and that a Chairman be elected for a period of six months, with each operator in turn obliged to nominate a Chairman. This clause was amended before every signatory was called upon in this respect. It is fair to say that some operators saw this requirement for nomination as an opportunity, not a burden, while others would rather procure equipment than chairmen. The first MoU meeting was held in London in October 1987, during a GSM Plenary: the so-called candlelight meeting, owing to a gale massively devastating power supplies in the city and forcing the meeting to work by candlelight. The MoU meeting was called by personal approach by the convenor, and the proposed agenda handwritten. The Danish representative missed this first meeting; he never investigated whether this was owing to his visiting the restroom just when the call went out (i.e. the convenor went around), or whether he was simply overlooked in the candlelight gloom.

The work of the MoU group was divided into a number of sub-groups. The actual tasks allocated to these groups were a mix between anticipated urgency, workloads and the prestige of having a sub-group chairman. The number was adapted to the most important participating operators (the Quadripartites plus two extras). In one instance, when the nomination of a chairman to a group was challenged, the matter was immediately solved by splitting the group. Not an unknown organisational technique.

Procurement

The document clearly reflected the industrial policy involved in the launch of the new digital system: the operators’ procurement policies were to “encourage a strong competitive European industrial manufacturing base”, of course properly within the constraints of international trade agreements and operators’ own cost efficiency. Such a clause is perhaps not as easily digested among operators today, even if the real commitment to industrial bias is formally not very high. When the four Nordic signatories jointly procured a GSM test system that was not European, this clause did not create uneasiness.

It was an MoU goal that signatories should open the GSM service in 1991, i.e. about four years after the establishment of the MoU. It was a good choice, whether the result of clever reasoning or sheer luck. Some operators needed the new system as soon as possible as an alternative to the exhausted analogue network; to them four years was a long time. But the specifications were really immature in 1987, and a shorter time would not have been realistic if the system was expected to work properly.

ISSN 0085-7130 © Telenor ASA 2004

The procurement rules to be followed were quite extensively described. In fact this was the only really detailed instruction in the MoU, and it filled an annex of its own. The procurement process to be followed by all signatories included system validation phases where enough equipment should be procured to verify the system: first the air interface and the interfaces between the base stations and the switch, and then more pieces to validate interfaces to, and interworking with, other networks. Test systems appeared around Europe and tests were performed to the benefit of the operators, and actually of no less benefit to the infrastructure manufacturers.

The GSM system did work, but the first networks were actually out of work for quite a long time: there were no mobiles to be bought. Part of this delay was due to type approval, or the lack of it. Several test houses were established, preparing for type approval competition. However, it turned out that the necessary type approval equipment required investments, and when there seemed to be more test houses than mobile types, investment willingness faded, resulting in some of the more solid operators buying the simulator under the auspices of the MoU cooperation.

The GSM standard

The MoU obligations of course referred distinctly to the CEPT recommendations for the digital system in the 900 MHz band. The document also instituted the MoU group as an instrument to help CEPT get out of possible deadlock situations caused by the requirement for unanimity. Voting was introduced in the MoU with weighted voting rights for the national signatories, the weighting taken from a table of somewhat obscure origin. If a difficult situation should occur in CEPT, an MoU meeting could be called and a vote held on the subject, not unlike the voting in the Eurovision Song Contest, and the signatories would all abide by the majority view in CEPT. A vote was never taken; the scheme served, in itself, as a deterrent.

Network deployment

The MoU document carried obligations on the signatories to cover particular areas in their countries in an early roll-out phase of the network, such as capital cities, main roads and main airports. This gave MoU members a possibility to criticize other members’ coverage plans (and corresponding roaming possibilities for neighbouring countries), and some could not resist the opportunity of such criticism. In one particular instance in the early days of the MoU, the discussion between two representatives became very heated before it all calmed down to a level of more normal roaming negotiations.

Promotion

The document also focussed on the promotional aspect. The signatories were required to make efforts to extend the GSM system to all territories of CEPT, i.e. Western Europe, as well as to provide information to the public in their own territories of the glory of the new digital, cellular system. It was even ventured, as proposed by the Nordic Administrations, to try to promote the standard for take-up in the territories outside CEPT; that is, to make the standard really international. A dedicated sub-group was given the task of working out generic material for such national promotion. This group was also tasked to come up with a better name for the system, marketwise. Groupe Spécial Mobile was not felt piquant enough for the broad public, even if it was in French. The exercise focussed on snappy names containing popular syllables like net, com, cell, mob and tel, either in combination or spiced with glorifying adjectives (super, universal). The surprise was not so much the number of permutations proposed, but the fact that any one of them would be in use somewhere in the world and thus would have to be bought from some hitherto unknown company. Since the acronym GSM was already ubiquitous, far more ubiquitous than GSM pieces of equipment, and actually constituted a brand name, it was in the end decided to keep the letters but change their interpretation. “Global System for Mobile communications”, however, leaves the last word, and maybe the most important one, somewhat in limbo.

Intellectual property rights

The signatories should align their policies on Intellectual Property Rights (IPR) to ensure that the main interfaces in the standard were open. The governmentally funded research projects in some countries forbade the exploitation of patents, and the participating operators and industry could be handicapped if others were free to apply patents. The MoU group therefore drew up a patent declaration to be issued with the national calls for tender, stipulating licence-free conditions for the purchase. This turned out to be putting operator hands into a manufacturer beehive, where patents are honey for trade. In particular Motorola objected, and they indicated that they had applied for essential GSM patents, the contents of which could not be revealed according to the patent rules. The company and the MoU group met trying to agree in substance and, when that failed, on a text flexible enough to delay possible confrontations. The meetings were many in number and long in duration, but the outcome was essentially that the problem of the antagonism between standard making and patents was transferred to a more general level in the new European standards organisation (ETSI). The operators bought their GSM equipment without too much patent ado. The possible patents concealed in the deals were apparently exchanged on fair and reasonable terms.

Expansion of the MoU document

The work in the MoU Group turned out to be quite successful, so that clauses in the document, once goals to strive at, turned into constraints, and the focus on Western Europe became too narrow. In 1991 the first non-European country, Australia, applied for membership.

Therefore, in 1991 an addendum was made to the original paper. Possible members from outside CEPT were released from the bonds to support European industry, and the initial requirement to open the service in 1991 just had to be adjusted if the group was to avoid becoming a very closed interest group as time went by.

At that time the UK had once again been the vanguard in competitive regulation and had introduced three licences for GSM at 1800 MHz, also called PCN (Personal Communication Networks). These newcomers formed their own, competing operator organisation, which, however, became quite short-lived before being taken up into the real MoU. Their system (GSM in 1800 MHz) was not allowed to be called GSM 1800 until later, and they had to abide for a while with the pseudonym DCS (Digital Cellular System).

New housekeeping rules were introduced. Chairmen were to be elected from volunteers, the term of office was prolonged to one year, and it was necessary to set up a permanent secretariat. After a brief visit to The Hague, the MoU came to pick the fair city of Dublin as its home town.

The frontier era was over in the mid-nineties. The Memorandum of Understanding left the arena and the cooperation re-emerged in a new outfit, namely as an Association. It just had to be so, to cater for a substantial increase in member operators and new challenges and new undertakings. The Magna Carta of the organisation has now changed a bit. The old document had goals defined in detail and only a few working rules. The new document has goals that are more generic and more loosely defined, since they shall cover a plethora of members and should last longer. Details (whether or not the Devil is in them) have jumped from goals to rules.

Retrospective

If the telecommunications world is a stage, the play “GSM” has been a box office hit. The operators have been both playwrights and actors in main parts. Some would say that the manufacturers were the protagonists, as always, since vast equipment developments have conditioned the success. A touch of complacency by the operators should be condoned, though. We took on heavy commitments and worked together as required when opportunity called. It is not certain that the regulative framework, market conditions and business willingness will properly combine to allow similar undertakings again.

Petter Bliksrud (62) graduated from the Norwegian Institute of Technology in Trondheim, Norway, in 1968 and was subsequently employed by Norwegian Telecom. He has been involved in all areas of mobile communications, both terrestrial and satellite. In particular, he has participated in the ITU (CCIR) development of maritime mobile services and in international search and rescue satellite projects. In the 1980s he was engaged in the CEPT/ETSI specification work on the GSM system and in establishing the GSM Memorandum of Understanding, and in the succeeding decade he actively participated in several international bodies on mobile communications. His present area of work within Nordic Mobile in Telenor concerns primarily questions related to national and international telecom regulations.

email: [email protected]


Taking over the responsibility for an institution like Telektronikk soon made it clear that I had to educate myself in the history of the journal. Even though I had been involved with the journal since 1997, as editor for the Status section, I had not spent much time looking back into its past.

I am sure most of my predecessors have felt the same need. For me it became more intense when the 100th anniversary was coming up. Spending hours and days browsing the archives and reading previous issues and volumes made me also want to bring some of the knowledge out to our readers.

This section, ‘From the archives’, appears in this special Anniversary Issue and contains two articles from the early years. The idea is to illustrate how Telektronikk has imparted news and knowledge about important events and technologies in previous times.

The following articles describe two pioneering events in early telecom: the first transatlantic telegraph cable in 1858, and the first Northern European wireless telegraph, established in Lofoten in North Norway in 1906. They are both based on early articles in Telektronikk’s predecessor, Technical Information, from 1906 and 1907.

The transatlantic telegraph cable from 1858 broke the ground (or should we say: ‘sea’) for connecting the continents telegraphically. A lot of the physics of such a connection was unknown and had to be explored, and the problems had to be solved along the way. It involved contemporary engineers and scientists like Charles Wheatstone and William Thomson (Lord Kelvin). Even if it operated for only 27 days, it paved the way for permanent installations.

Wireless communications has become the most important business of today’s telecom operators, Telenor included. Right from the beginning, Norway has been an early adopter of wireless technology to cover telecommunications needs. Just a few years after Guglielmo Marconi had demonstrated that radio waves could reach beyond the horizon, Telegrafverket installed the first North European permanent wireless telegraph. It was actually the second in the world.

Next year, in 2005, Telenor can celebrate its 150th anniversary. Snippets from the archives may well appear under the ‘Kaleidoscope’ heading together with other contributions of an historic nature.

Introduction: From the archives

PER H. LEHNE

Per H. Lehne is Research Scientist at Telenor R&D and Editor in Chief of Telektronikk.

For a presentation of the author, turn to page 9.


Introduction

Both manufacturing and deploying a sub sea cable of approximately 3,000 km length was a risky and expensive project in the 1850s. There were a lot of unknowns, and the first cable, which cost a total of £465,000, worked for less than a month. This article contains a summary of the original article from Technical Information (Telektronikk, see [1]) in 1907. Additional information has also been gathered. A book from 2003, ‘The Cable – The Wire that Changed the World’ by Gillian Cookson [2], tells the full story of this daring project. However, we will start with some introductory notes on Samuel F.B. Morse and his ‘electric’ telegraph, as well as the introduction of the telegraph in Norway.

The Morse telegraph

Samuel Finley Breeze Morse (1791 – 1872) is recognized as the inventor of the electric telegraph. Morse was an American artist, educated in England, but before that he had studied at Yale College, where he was excited by lectures on the then newly-developing subject of electricity.

In 1832 Morse was on his way home from Europe on the ship Sully. Conversations with Charles T. Jackson, Professor in Physics at the University of Boston, gave Morse an idea. Jackson had been to Paris and heard the latest news on research in electricity. During the trip across the Atlantic, Morse developed and sketched the main concept for a telegraph apparatus based on electrical pulses.

It is believed that he had his first telegraph model working in 1835 in the New York University building where he taught art. In his model he used crude materials: an old artist’s canvas stretcher to hold it, a home-made battery and an old clockwork to move the paper on which dots and dashes were to be recorded. With the aid of new partners, Morse applied for a patent for his new telegraph in 1837. By then Morse had grown discouraged with his art career and was devoting nearly all his time to the telegraph.

His telegraph was exhibited in New York in 1838, and there Morse transmitted ten words per minute. Instead of using the number-word dictionary (see frame about the Morse alphabet), he used dot-dash code directly for letters. The Morse code that was to become standard throughout the world had essentially been introduced [3].

The Morse code was dominant until the 1930s, but already in 1902, Charles L. Krum, a mechanical engineer and vice president of the Western Cold Storage Company, had started to develop the teletypewriter machine. This differed in many ways from the current interest of the telegraph engineers, which was to further develop the Morse telegraph. Krum equipped his teletypewriter with a keyboard similar to an ordinary typewriter. Speed telegraphs, like the Wheatstone–Creed Morse telegraph, relied on specially trained people and had problems with the synchronisation between the transmitter and receiver. The synchronisation problem was avoided by introducing start and stop signals for each combination of impulses (char-

The transatlantic telegraph cable of 1858 and other aspects of early telegraphy

PER H. LEHNE

Per H Lehne is Research Scientist at Telenor R&D and Editor in Chief of Telektronikk.

Telektronikk 3.2004

From the very beginning, Telektronikk has made room for historical articles on inventions and major events in telecommunications. The first transatlantic telegraph cable was laid in 1858, and a comprehensive article on ‘Cable telegraphy’ from 1907 describes the project. It also describes other sub sea cable projects from the time of the early telegraph in the 1850s until 1907. Cables and cable technology were of major concern for the Norwegian Telegraph Administration for many years, because most of the investment cost in establishing the telegraph and telephone networks lay here.

In 1844, Morse demonstrated to the US Congress the practicality of the telegraph by transmitting the famous message “What hath God wrought” over a wire from Washington to Baltimore. The picture shows Morse’s telegraph receiver, which he used at the demonstration in 1844 (Smithsonian National Museum of American History)


Great Britain and the Netherlands used a different approach, combining the telex with the telephone service, but this was later abandoned. The teletypewriter exchange (TWX) service in the USA was introduced in 1930, but it did not comply with the recommendations from CCIT, thus making it difficult to interconnect the European and American telex networks.

The electric telegraph comes to Norway

The Norwegian State Telegraph (Statstelegrafen)1) was established January 1, 1855 with the first Morse telegraph line between Christiania (Oslo) and Drammen. Already in the first year of Technical Information, in 1904, the 50th anniversary of the public telegraph in Norway was treated.

But before Statstelegrafen, in 1853, a telegraph line owned by the first railway company in Norway was opened between Christiania2) and Strømmen/Lillestrøm (approximately 20 km east of Oslo). Telektronikk, issue 2, 1961 contains the lectures from the Norwegian Telephone Engineers Conference (Telefoningeniørmøtet – NTIM) in 1960. In this issue, Head engineer L. Saxegaard from the Norwegian State Railways (NSB) introduces a lecture about the State Railways’ telecommunications with this fact.

On September 19, 1853, the year before the first Norwegian railway (‘Hovedbanen’ between Christiania and Eidsvoll) was opened, the first railway telegram was dispatched from a workmen’s barrack somewhere between Strømmen and Lillestrøm, east of Oslo. The sender was Mr. Greener, the telegraph constructor for the railway, and it was received by Greener’s Norwegian assistant, Christian Wiger. The message was simple: “The telegraph in order between here and Christiania.” Wiger later became NSB’s telegraph inspector, a position he held for over 50 years.

This was not a Morse telegraph, but used a double-needle system with a different alphabet. In March 1854, the line was opened also for private traffic, and it was frequently used for ordering transport. The price of a transport-telegram was 12 ‘skilling’ (40 ‘øre’)3) for 15 words. Other types of telegrams cost twice as much. Up to 70 telegrams per day were dispatched.

The Morse alphabet (1832 – 1999)

The original Morse code from 1832 was a translation system consisting of two essential parts:

• A two-way code book or dictionary in which each English word was assigned a number (and, in order to spell out proper names, unusual words, initials, etc., when necessary, each letter of the alphabet was also assigned a number), and

• A code symbol for each digit from 0 – 9 to represent that number.

The sender would convert each word to a number and send that number; the receiver would then convert it back again to the English word with a reverse dictionary.

Later, the dot-dash code was invented; it was eventually named the International Morse Alphabet and adopted by the ITU.

The International Morse Alphabet has since gone through smaller changes and amendments. As an example, the international distress signal ‘SOS’ (sent as one sign) was introduced in 1906, when radiotelegraphy appeared. As late as the 1960s, special signs were introduced for e.g. opening and closing parentheses.

Morse’s system was practically supreme for transferring text from the start in the 1840s until 1910–20, when the telex arrived.

Also in Norway, when the State Telegraph was established in 1855, Morse’s system was used. In the mid-1950s the use of Morse coding was completely abandoned in the Norwegian network. On the wireless, the last fixed communication links were migrated to telex lines during the 1960s.

31 January 1999 was the last day that the International Telecommunications Convention demanded competence in Morse telegraphy aboard ships over a certain size.
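The two-part dictionary scheme described in the frame above can be sketched in a few lines of Python. Note that the code book entries below are invented for illustration; they are not taken from Morse's actual 1832 dictionary.

```python
# Toy illustration of Morse's original 1832 dictionary scheme.
# The words and numbers are invented, not Morse's actual code book.
codebook = {"ship": 17, "arrived": 42, "safely": 93}
reverse = {num: word for word, num in codebook.items()}

def encode(message):
    """Sender: convert each word to its number from the code book."""
    return [codebook[w] for w in message.split()]

def decode(numbers):
    """Receiver: look each number up in the reverse dictionary."""
    return " ".join(reverse[n] for n in numbers)

msg = "ship arrived safely"
assert decode(encode(msg)) == msg
```

The scheme's weakness, which led Morse to abandon it for the direct dot-dash letter code, is visible here: any word missing from the dictionary must be spelled out letter by letter through separately numbered letters.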

1) Norw.: ‘Statstelegrafen’. The name of the national Norwegian Telegraph and Telephone administration has changed throughout the years. We will use the Norwegian terms ‘Statstelegrafen’ for the early years and ‘Telegrafverket’, which was the name until 1969. Then, the names ‘Televerket’ and ‘Teledirektoratet’ came into use, which in English were commonly rendered as ‘Norwegian Telecom (Administration)’. From January 1, 1995, the name of the company is ‘Telenor’, which causes no difficulties in English.

2) The Norwegian capital, Oslo, had the name ‘Christiania’ until 1925.

3) The Norwegian currency became ‘krone’ = 100 ‘øre’ in 1875. Before this, the system was based on ‘Speciedaler’ = 120 ‘skilling’. 1 Norwegian ‘krone’ was defined to be 1/4 ‘Speciedaler’, thus 30 ‘skilling’.

acters). A new code was introduced, using five impulses per character. It was originally invented by Émile Baudot around 1874 and adopted by the CCIT as the ‘International Alphabet No 1’ (IA1). Later, it was improved by Donald Murray by adding extra characters and ‘shift’ codes. The improved code is still known as the ‘Baudot code’, or the ‘International Alphabet No 2’ (IA2). The code was used as long as the teleprinter service was operational. The term “baud” (a measure of symbols transmitted per second) is named after Baudot.
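The start-stop idea described above can be sketched as follows: each five-impulse character is wrapped in its own start and stop signal, so the receiver resynchronises at every character instead of needing a continuously synchronised clock. The 5-bit patterns below are invented for illustration; they are not the actual IA2 assignments.

```python
# Sketch of start-stop framing for a five-impulse (5-bit) character code.
# The bit patterns are invented for illustration, not the real IA2 codes.
CODES = {"A": "11000", "B": "10011", "C": "01110"}
DECODE = {bits: char for char, bits in CODES.items()}

START, STOP = "0", "1"  # idle line rests at mark ('1'); start bit is a space

def frame(text):
    """Wrap each character in its own start and stop bit."""
    return "".join(START + CODES[c] + STOP for c in text)

def deframe(line):
    """Receiver: resynchronise on each start bit, read 5 data bits."""
    chars, i = [], 0
    while i < len(line):
        if line[i] == START:                  # edge marks a new character
            chars.append(DECODE[line[i + 1:i + 6]])
            i += 7                            # skip start + 5 data + stop
        else:
            i += 1                            # idle mark, keep scanning
    return "".join(chars)

assert deframe(frame("CAB")) == "CAB"
```

Because synchronisation is re-established per character, small speed differences between transmitter and receiver no longer accumulate across a message, which was exactly the problem of the earlier speed telegraphs.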

In 1934, CCIT’s Plenary Assembly in Prague discussed the matter, and a test system for the new telex service was demonstrated. The name ‘telex’ is short for ‘teleprinter exchange’ and became the international name for teleprinter-subscriber telegraphy. The same year, a telex service was opened in Denmark, Germany and Switzerland using dedicated telegraph lines.


After Statstelegrafen/Telegrafverket was opened in 1855, the development went fast. The following year, a line to Sweden was opened, and from there, Denmark and parts of the European continent could be reached. In a few years, lines were built from Oslo to Bergen and Trondheim, and during the next decade lines to North Norway were ready. Lofoten was in a special position when it came to expanding the telegraph in the north, because of the important fisheries in the area. That is the reason why Lofoten already in 1861 had a telegraph line, going from Sørvågen to Brettesnes (see also the article on the wireless telegraph in this issue [4]). This line was connected to the main national network in 1868. In 1870, the line had reached Norway’s eastern outpost, the town of Vardø. In 1867, the first sub sea cable between Norway and Denmark was opened (see later in this article).

Telegrafverket’s history has been covered on several occasions in Technical Information and Telektronikk, especially in 1955, on the occasion of Telegrafverket’s 100th anniversary. Issue 1-3 in 1955 contains a 24-page article by the editor, Julius Ringstad (Editor from 1942 to 1957), and his successor Nils Taranger (Editor from 1957 to 1978).

Taranger later wrote a retrospective article about telegraphy in Telektronikk, issue 3-4, 1963, in which he draws up the development of the telegraph until then. The spirit of the article is optimistic on behalf of the telegraph, and also competitive in the sense that he promotes telegraphy at the expense of the telephone, which had seen rapid growth in the period until 1962. In a sense, he foresees the need for data communications in the future, something he in the endnotes calls ‘a kind of telegraphy’.

Morse telegraphy was gradually superseded by the telex system also in Norway, and the first manual telex exchange was opened in Oslo in February 1946. Initially it had 20 numbers, which was increased to 35 a month later. In 1946, there were 10 subscribers in Oslo, 4 in Bergen and 1 in Trondheim. The growth was slow, and in 1953 there were only about 290 subscribers in total. By the end of 1961, the number had grown to 1290.

In July 1957, the first automatic telex exchange was opened in Oslo with a capacity of 500 numbers. Bergen followed in April 1961, and the Trondheim exchange was under installation in 1962.

In Norway, the telex network was closed on June 30, 2000; however, telex messages to and from the international telex network can still be sent and received via electronic mail. At the time of closing, the network had about 800 subscribers.

The first transatlantic telegraph cable

The following story is based on an article on cable telegraphy, which appeared in Technical Information in 1907. The author is unknown, as the rule was not to ascribe articles in the early volumes.

Already in 1842, Samuel Morse had tested a cable in New York harbour across part of the East River, through which he sent signals, and the following year he suggested to the US government to establish an electric connection between Europe and America.

In 1855, the American telegraph network had reached Newfoundland, and on the European side, several places on the Irish west coast also had telegraph service. Thus, the possibility of interconnection was discussed on both sides of the Atlantic. Severe difficulties were reported: the shortest distance was 3,700 km, and the sea depth reached down to 5,500 m. The only doubt was whether it was possible to transmit telegraph signals over such a long distance at a feasible speed.

The first telegraph apparatus owned by the railway company operating ‘Hovedbanen’ between Christiania and Eidsvoll was used on the section between Christiania and Strømmen/Lillestrøm. It was of the double-needle type. On the right there are two ‘needles’, which moved right or left according to the position of the two transmission handles below. When the handles were resting in the vertical position, the apparatus was in receiving mode (Illustration from Telektronikk, issue 2, 1961, p 87)

The Atlantic Telegraph Co. was incorporated in October 1856. The original capital was 300 shares at £1,000 each. Of this, Cyrus Field and John W. Brett took 25 shares each. The capital was increased by £50,000 a few days later. The total of £350,000 was secured by December 1856. At the end of 1858, all the company’s assets had been spent, and the company had more liabilities than it could meet [2].


Professor Charles Wheatstone4) had, in his laboratory, measured the ‘speed of electricity’ to be 300,000 km/s. In telegraph wires, the speed had been determined to be 30,000 km/s, and the speed on the subterranean and sub sea connection between London and Dublin was found to be much lower. In 1854, Charles Bright had estimated the speed on gutta-percha insulated cables to be less than 2,000 km/s.

However, the doubts were resolved by Edward O.W. Whitehouse5), who tested this by interconnecting several subterranean cables to form a length of approx. 3,700 km and achieved a speed of 210 – 270 signals per minute6). Morse himself participated in these tests during autumn 1856. After having slept on it, he estimated that eight to ten words per minute, or twenty messages an hour, could be sent between Ireland and Newfoundland [2].

In 1856, John Watkins Brett, Charles Bright and Cyrus W. Field7) founded the Atlantic Telegraph Company (see frame). The necessary initial capital, corresponding to 6 mill Norwegian kroner (NOK)8), was raised in London, Liverpool and Glasgow. The following year it was raised to 8.5 mill, with a guarantee from the English State corresponding to 250,000 NOK per year and ships made available for the deployment.

The production of the cable started in 1857 with a time schedule of four months. The distance between the landing points was determined to be 3,000 km; thus a cable length of 4,600 km was set. The European landing point was the island of Valentia9), off the Irish southwest coast, and Trinity Bay in Newfoundland was chosen on the American side.

A lot of problems arose and improvisations were needed. For example, no housing was established on the beach for the manufactured cable, and it had to lie in the open, exposed to the sun, which deteriorated the outer layers of the cable. This was probably an additional factor influencing the rapid failure of the cable (see later).

Two ships were used for the deployment, the British battleship HMS ‘Agamemnon’ and the American steam frigate ‘Niagara’. With half of the cable on board each ship, the idea was to start in the middle of the Atlantic Ocean and lay the cable towards each side. For some reason, Whitehouse disturbed this plan, and the deployment started in Ireland on August 6, 1857. After several problems, the cable broke at 4,200 m depth after 620 km had been laid. The ships had to return to England, where 1,300 km of new cable had to be manufactured.

On June 16, 1858, the deployment was started again, now from the mid-Atlantic as originally planned. In this attempt, the cable broke three times, and they had to start afresh. More than 1,000 km of cable was lost and more cable had to be manufactured. On the

The cable used consisted of 7 strands of 0.7 mm copper wire covered with three layers of gutta-percha. The outer diameter was just 10 mm. The core was wrapped in jute yarn soaked with a composition consisting of 5/12 Stockholm tar, 5/12 pitch, 1/12 boiled linseed oil and 1/12 common beeswax. Armouring consisted of 18 strands, each strand composed of 7 of the best charcoal iron wires, each 0.7 mm. The finished cable was then dipped in a heated mix of tar, pitch and linseed oil (Illustration from ‘Technical Information’, issue 1, 1907)

4) Sir Charles Wheatstone (1802–1875) is for electrical engineers best known for the ‘Wheatstone Bridge’, a method for precision measurement of electrical resistance. However, he was heavily engaged in the telegraph from 1835 and, together with partner William Cooke, laid the first experimental line between Euston terminus and Camden Town station of the London and North Western Railway on July 25, 1837. He had already in 1840 devised a method to connect Dover with Calais across the English Channel.

5) Edward O. Wildman Whitehouse was a retired medical doctor who had taken an interest in electricity and telegraph signalling. He had a practical approach to the problem and was later blamed for the failure of the cable by driving too much current through it. He was in conflict with another of the 17 directors of the project, Prof. William Thomson of Glasgow University, who in 1892 was ennobled as Lord Kelvin for his achievements in science, among them the Atlantic telegraph cable project in 1866.

6) The article does not say what is meant by the term “signals per minute”, but the same numbers are given in [2].

7) Cyrus West Field was an American entrepreneur from Massachusetts. By age 20, he was a partner in a paper manufacturing company, and at 33 (1852) he had retired from business as a wealthy man.

8) At the time, the exchange rate between Norwegian kroner (NOK) and pounds sterling (GBP) was close to GBP 1 = NOK 20.

9) Today, Valentia is best known as a resort island. The cable station from 1865 was closed in 1965, and is now partly a factory and partly residential. In 2000 it was recognized as an IEEE Electrical Engineering Milestone, and a plaque is placed at the station. See: http://indigo.ie/~cguiney/valentia.html.

ISSN 0085-7130 © Telenor ASA 2004

Page 209: 100th Anniversary Issue: Perspectives in telecommunications

207 Telektronikk 3.2004

third attempt, which started on July 29, things went well. The ‘Niagara’ landed in Newfoundland on August 5, and the ‘Agamemnon’ reached Ireland at the same time after only one accident with the cable. Immediately after the cable ends had been brought ashore, the first signal was received. “The joy was great in England and America, and great festivities were held in both countries.” Of course, congratulatory telegrams were sent between Queen Victoria and U.S. President James Buchanan.

A major error was, however, soon discovered 550 km from Valentia, and on 1 September the connection was broken. In total, 732 telegrams had been sent through the cable. One of these is said to have saved the English government a sum corresponding to 900,000 NOK10).

The first transatlantic telegraph cable that worked reliably over a longer period was laid and opened on July 27, 1866, also between Valentia and Newfoundland; the short-lived cable of 1858 had, however, demonstrated the feasibility of such a project. The article from 1907 describes the 1866 cable only briefly.

Norway’s first subsea cable

In the early years after the telegraph had come to Norway, international traffic had to go through Sweden and Denmark; the Danish-German War in 1864, however, demonstrated the necessity of multiple choices and redundancy. The first subsea cable from Norway was then opened on July 1, 1867. In 1866, a licence was given to an English company for the deployment of a cable between Norway and Denmark. A conflict evidently arose with another powerful telegraph company, which wanted to take over this licence as well as a licence for a Danish-English cable. Because of this conflict, the cable could not be manufactured in England. A later licensee was the cable manufacturing company Newall & Johnson, and they were forced to use parts of an old cable, which had been used in Greek waters. The parts were sent to Arendal, on the south coast of Norway, where Telegraph Director Carsten Tank Nielsen and Telegraph Commissary Jonas P.S. Collett11) evaluated the state of it. They could not approve it; however, it was sent to Copenhagen and examined by Professor Thomsen12), who found it to be all right. The cable was then laid and opened on July 1, 1867.

Subsea cables, both for telegraph and telephone, are the subject of articles in Technical Information and Telektronikk on several occasions into the 1960s. Overview articles on different cables from Norway to the continent as well as the British Isles were published, as well as technical tutorials on specific problems.

A curiosity can be mentioned. In 1929, volume 9-10, a notice is printed under the signature ‘He’ about plans for a transatlantic telephone cable. It is a transcript from the Journal of the A.I.E.E.13) and reports that Bell Telephone Laboratories in 1928 had completed a deep-sea cable which could be feasible for a transatlantic telephone connection. It is also reported that work was underway to develop a cable system ‘of this type’ to connect London and New York City, and that the connection could possibly be made as early as 1932. Obviously, it took longer, and whether that stems from technical, economic or political problems shall remain unsaid. The first transatlantic telephone cable was completed in 1956, the Trans-Atlantic Cable No. 1 (TAT-1), between Oban on the coast of Scotland and Clarenville, Newfoundland.

The British battleship H.M.S. ‘Agamemnon’ experienced rough weather on its trip from the mid-Atlantic until it reached the Irish coast on August 5, 1858 (Illustration from ‘Technical Information’, issue 1, 1907, p 4)

10) According to Cookson [2], one significant message was the news that the Sepoy Mutiny in India had been put down, which halted the mobilisation of two British regiments from Canada. Nine words had saved the British Government £60,000.

11) Carsten Tank Nielsen was the first Norwegian Telegraph Director, from 1855 to 1892. Jonas P.S. Collett was temporary Director for three and a half months in 1892 until Jonas Severin Rasmussen was appointed.

12) This could be Professor Julius Thomsen of the University of Copenhagen. He was a professor of chemistry from 1866, and has the merit of being the first to suggest the modern form of layout of the Periodic Table.

13) A.I.E.E. – American Institute of Electrical Engineers – is the predecessor of the IEEE, which was formed in 1963.


References

1 Lehne, P H. Enlightening 100 years of telecom history – From ‘Technical Information’ in 1904 to ‘Telektronikk’ in 2004. Telektronikk, 100 (3), 3–9, 2004. (This issue)

2 Cookson, G. The Cable – The Wire that Changed the World. Gloucestershire, UK, Tempus Publishing Limited, 2003. ISBN 0 7524 2366 5.

3 Locust Grove – The Samuel F.B. Morse Historic Site. September 22, 2004 [online] – URL: http://www.morsehistoricsite.org/history/morse.html

4 Lehne, P H. The second wireless in the world. Telektronikk, 100 (3), 209–213, 2004. (This issue)

Sources from Telektronikk’s archives

Brief overview of the Norwegian Telegraph’s technical development during its 50 years of existence (Kortfattet oversigt over Telegrafvæsenets tekniske udvikling under dets 50-aarige bestaaen). Technical Information, 0 (8), 57–74, 1904.

Cable telegraphy (Kabeltelegrafi). Technical Information, 3 (1), 1–17, 1907.

Trans-Atlantic telephone cable (Transatlantisk telefonkabel). Technical Information, 25 (9-10), 70, 1929.

Taranger, N, Ringstad, J. The telegraph (Telegrafen).Technical Information, 51 (1-3), 30–54, 1955.

The Norwegian Telegraph through 100 years (Telegrafverket gjennom 100 år), Anniversary issue. Technical Information, 51 (1-3), 1955.

Saxegaard, L. The Norwegian State Railway’s telecommunications (Statsbanenes telekommunikasjoner). Telektronikk, 57 (2), 87–98, 1961.

Taranger, N. Characteristics of the development towards modern telegraphy (Karakteristiske trekk ved utviklingen frem til moderne telegrafi). Telektronikk, 59 (3-4), 160–173, 1963.

Other sources

History of the Atlantic Cable & Submarine Telegraphy. September 11, 2004 [online] – URL: http://www.atlantic-cable.com

Lindley, D. Degrees Kelvin. The Genius and Tragedy of William Thomson. London, UK, Aurum Press Ltd., 2004. ISBN 1 84513 000 6

Norwegian Telecom Museum (Norsk Telemuseum). September 22, 2004 [online] – URL: http://www.norsktele.museum.no/index_ieoff.html

Rafto, T. The history of the Norwegian State Telegraph 1855–1955 (Telegrafverkets historie 1855–1955). Telegrafverket, 1955.

For a presentation of the author, please turn to page 9.


Introduction

The first Norwegian permanent wireless telegraph was established in Lofoten in Northern Norway, between the fishing port of Sørvågen and the port on the island of Røst. This happened in 1906, and the same year Hermod Petersen1), the first editor of Technical Information (Telektronikk, see [1]), writes an article about the project. Petersen was in charge of the project, so in this case we can safely say that the story is told ‘in real time’ by the history makers themselves. Articles from this time are not filled with personal details, but are very sober and based on facts. However, it is possible to sense the spirit of the time: a combination of entrepreneurship and the sense of taking part in a nation-building project.

This article reviews the story of establishing Norway’s first wireless telegraph in 1906, but before that we will briefly review Guglielmo Marconi’s experiments, which culminated in the demonstration of the transatlantic wireless telegraph in 1901.

Marconi’s experiments

From a telecommunications point of view, the wireless revolution started with the Italian scientist and engineer Guglielmo Marconi’s experiment in 1901. On 12 December, Marconi received the first long-distance radio transmission at Signal Hill, St. John’s, Newfoundland, 3,592 kilometres from the transmitter, which was situated at Poldhu in Cornwall on the Atlantic coast of England. There, electrical engineer John Ambrose Fleming2) transmitted the Morse code signal for ‘S’ across the Atlantic Ocean, and Marconi heard it – three short clicks – through an earphone. Strong winds at Signal Hill made it difficult for Marconi and his assistants to launch the aerial on its kite, but at 12:30, 1:10 and 2:20, he was able to pick up the three dots that were being transmitted from Poldhu.

It was believed at that time that wireless transmission was limited to a range of approximately 200 miles (320 km), due to the curvature of the earth’s surface. When Marconi’s experiment proved otherwise, it was a sensation. It proved conclusively that radio wave transmission was not bounded by the horizon. Shortly after this, contemporary scientists Arthur Kennelly and Oliver Heaviside suggested the existence of a layer of ionised air in the upper atmosphere (the Kennelly-Heaviside layer, now called the E region of the ionosphere) [2].

The size of the Poldhu station was tremendous. The antenna system consisted of about 400 wires suspended in an inverted cone from a 60 metre circle of twenty 60 metre high masts. This system was destroyed in a storm in 1901, and a temporary aerial was constructed using two of the original masts on directly opposite sides of the circle. This was the aerial configuration in place during the transatlantic experiments in December 1901 [3].

The second wireless in the world

Per H. Lehne

Per H Lehne is Research Scientist at Telenor R&D and Editor in Chief of Telektronikk.

Norway was early in adopting the wireless telegraph. Already in 1899 the Telegraph Administration had sent an engineer to study Marconi’s system. The motivation of the Telegraph Administration was the communication needs of the large cod-fishing activity, which created a trade of substantial national value. The fishing ports and islands were hard to reach without laying expensive cables, and wireless communications were emerging as an attractive solution. The project of planning and deploying what became the second wireless telegraph line in the world was told by the first editor of Telektronikk the same year it happened, in 1906.

1) The article uses the Norwegian term “vedkommende ingeniør”, probably meaning the person who writes the story. Rafto [4] confirms that this was Hermod Petersen.

2) Sir John Ambrose Fleming was himself an inventor. In 1904, he demonstrated the vacuum tube diode, the ‘pre-amble’ to the radio valve, which appeared in 1906.

Guglielmo Marconi (1874–1937) in the station at Signal Hill, Newfoundland in 1901

In 1894 Marconi had begun to study the works of Heinrich Hertz3). He started experiments on the application of Hertzian waves to the transmission and reception of messages over a distance, without wires, late the same year at the Villa Griffone at Pontecchio, Bologna, Italy – the family home.

He progressively increased the distance for transmission and reception of signals: across a room, down the length of a corridor, from the house and then into fields. In the early summer of 1895 Marconi achieved signal transmission and reception over a distance of about 2 km, even when the line of sight was blocked by a hill. As long as visual contact was maintained, the waving of a handkerchief indicated success. When that was no longer possible, a gun had to be fired.

In 1897 he founded his own company, the “Wireless Telegraph and Signal Company”4), from which he risked £50,0005) for the 1901 experiment [3].

In 1909 Marconi, together with Karl Ferdinand Braun, was awarded the Nobel Prize in Physics “in recognition of their contributions to the development of wireless telegraphy” [5]. Many people were surprised and even disappointed that Marconi had to share the prize with his main competitor. Braun was the inventor of the cathode ray tube in 1897, and he was also one of the founders of Telefunken.

Lofoten 1906

In 1861, nine fishing villages had been connected to each other by telegraph lines during the Lofoten fishing season, the outermost being Sørvågen. In 1868, the Lofoten Line was linked up to the main Norwegian network, and from 1873, Sørvågen Telegraph Station was open all year round [6]. The Lofoten island of Røst was a centre point for the cod fisheries, and a cable, if it were to be laid across Moskenesstraumen, was estimated to cost NOK 100,000 [7].

Already in 1899 the Telegraph Administration had sent an engineer6) to study Marconi’s system, motivated by the communication needs of the large cod-fishing activity, which created a trade of substantial national value. The fishing ports and islands were hard to reach without laying expensive cables, thus the new wireless came up as a very attractive solution. At this time, however, the evaluation was that

The first aerial erected for the transatlantic experiment at Poldhu station was a circular structure; however, this system was destroyed in a storm before the experiment could take place. This temporary aerial was then erected and used for Marconi’s experiment on 12 December 1901 (Photo from T. Rafto [4])

Sørvågen – an early Norwegian hub in wireless communication [6]

1861 Norway’s first permanent ‘fishing’ telegraph. The ‘Lofoten line’ was 170 km and was the country’s first telegraph line outside the main national network. It connected nine fishing ports telegraphically.

1906 The first North European permanent wireless telegraph was established between Sørvågen and Røst – the second wireless in the world.

1908 Norway’s first permanent ships telegraph. In 1906 and 1907, the staff at Sørvågen tried intensely to serve Emperor Wilhelm II’s ship, the ‘Hohenzollern’, in the best possible manner, however in vain. Contact was not established. This was regarded as a shame for the young nation of Norway. On July 1, 1908, contact was established, and we can probably thank the German Emperor for Sørvågen telegraph station being opened for communication with ships.

1928 Norway’s first permanent wireless telephony station in Sørvågen communicated with another station at Lofotodden.

1946 Norway’s first permanent, non-military, ‘decimetre’ radio link connection was established between Sørvågen – Værøy and Sørvågen – Røst. It used Telefunken equipment left behind by the Germans after WW II.

3) In 1887, Heinrich Hertz (1857–1894) did the first experiments proving Maxwell’s theories right, with the discovery of radio waves.

4) In 1900, the company became Marconi’s Wireless Telegraph Company. The company later grew into a global communications and information technology company, which was renamed Marconi plc in 1999.

5) £50,000 in 1901 corresponds roughly to £2.75 million today.

6) Principal engineer (avdelingsingeniør) Rødland [4].


the available equipment was not satisfactory, especially when it came to reliability.

The first Norwegian experiments with wireless telegraphy were carried out during the summer of 1901 by the Norwegian Navy, and the results were particularly rewarding. The equipment had improved significantly after Marconi started to face competition. In 1905, the Norwegian Navy established coastal radio stations at Tjøme and Flekkerøy. These were later taken over by the Telegraph Administration, in 1910.

The same year as the Navy did their experiments, 1901, the Telegraph Administration sent Hermod Petersen up north to make inspections concerning the possibility of establishing wireless telegraphy connections to the fishing ports of Røst, Værøy, Træna and Grip. His examinations gave positive results in all cases. On March 21, 1902, the National Assembly granted the sum of NOK 15,000 to the Telegraph Administration to perform tests between the Lofoten fishing ports of Røst, Værøy and Sørvågen. Several companies made their equipment available for the tests, however not Marconi’s company. The tests were performed during the autumn of 1903. They confirmed the feasibility, and that the German equipment from “Gesellschaft für drahtlose Telegraphie, System Telefunken”, Marconi’s main competitor (see above), satisfied the demands.

The picture shows Sørvågen Radio in 1906. Originally, both Sørvågen and Røst had 50 m radio towers made of spliced spruce. Each was secured by 28 wires in seven groups. The masts were approximately 40 m from the telegraph building. The upper part of the mast at Sørvågen was destroyed by storms several times, and the height was reduced to 40 m. (Photo from T. Rafto [4])


The Sørvågen – Røst system operated at a wavelength of 520 m (approx. 580 kHz). Experience so far had in general concluded that the ‘short wave’ was more advantageous; however, Petersen concluded that on the connection between Sørvågen and Røst the ‘long wave’ was better, supposedly due to the influence of the Lofoten mountains.
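The quoted figures follow from the standard relation f = c/λ; a quick plain-Python sketch (the helper name and the exact speed-of-light constant are ours, not the article’s) confirms the approximation:

```python
# Frequency from wavelength: f = c / lambda, with c the speed of light.
C = 299_792_458  # m/s

def wavelength_to_khz(wavelength_m: float) -> float:
    # Convert a wavelength in metres to a frequency in kHz.
    return C / wavelength_m / 1e3

# The 520 m wavelength of the Sørvågen - Røst system is indeed approx. 580 kHz.
assert abs(wavelength_to_khz(520.0) - 576.5) < 1.0
```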

At this time, the Telegraph Administration decided that the time had come to establish the connection between Sørvågen and Røst. Værøy was to be included after some experience had been collected. However, the local authorities at Værøy and Røst did not want to give the Telegraph Administration a guarantee for “free services”, and the construction was for the time being given up. In 1905 the National Assembly granted the sum of NOK 22,000 for the establishment, provided the local authorities paid their share. Soon after, the county administration in Nordland took over the guarantee, and the case was closed.

The equipment was ordered from Germany, and the installers came to start work on December 8, 1905.

The installation was finished on February 26, 1906, and opened for testing the following day. On May 1, 1906, the connection between Sørvågen and Røst was opened, a distance of 59 km. The first North European permanent wireless telegraph thereby also became the second wireless telegraph connection in the world to be permanently connected to a national telegraph network.

Hermod Petersen actually remarks that it is the second in the world (“As far as it is known”) and that the world’s first wireless telegraph line was opened in the spring of 1905 between Saint Cataldo near Bari in Italy and Antivari7) in Montenegro, a distance of slightly more than 200 km. It is interesting to note that this is also printed in a short notice already in volume 5, 1904, an indication that the sheets were both edited and printed with some delay. Since the Sørvågen – Røst article appears already in issue 2, 1906, it could not have been edited or printed until the end of the year.

The wireless telegraph equipment used in the Sørvågen – Røst connection established in 1906 is comprehensively described in ‘Technical Information’ the same year. This is a simplified diagram of the complete equipment, both transmitter and receiver (Illustration from ‘Technical Information’, issue 2, 1906, p 30)

7) Antivari is now the town of Bar, south of Dubrovnik.


A short biography of Petersen is given in an article by Henrik Jørgensen in Telektronikk, issue 4, 1995. Petersen was a radio expert of international repute, and in addition to his pioneering work on the Røst – Sørvågen connection, he was also head of installation when the world’s first Arctic radio station opened in Spitsbergen, Svalbard, in 1911. In 1917 and 1918, Petersen writes a series of 12 articles as a tutorial on the wireless telegraph, where theories and principles are described comprehensively. This fills the volumes of Technical Information for a year.

Wireless telegraphy based on Morse was operated well into the 1960s, when these services were migrated into telex systems, but telephony had to enter the wireless area sooner or later. In Norway, the first permanent radio telephone connection was opened in 1920, between Grip and Kristiansund.

References

1 Lehne, P H. Enlightening 100 years of telecom history – From ‘Technical Information’ in 1904 to ‘Telektronikk’ in 2004. Telektronikk, 100 (3), 3–9, 2004. (This issue)

2 Bedi, J E. Guglielmo Marconi. In: The Froehlich/Kent Encyclopedia of Telecommunications. NY, USA, Marcel Dekker, 10, 361–368, 1995.

3 Marconi Calling – Interactive web museum. September 22, 2004 [online] – URL: http://www.marconicalling.com

4 Rafto, T. The history of the Norwegian State Telegraph 1855–1955 (Telegrafverkets historie 1855–1955). Oslo, Telegrafverket, 1955.

5 Nobelprize.org – The Nobel Prize in Physics 1909. September 22, 2004 [online] – URL: http://nobelprize.org/physics/laureates/1909/index.html

6 Norwegian Telecom Museum, Sørvågen (Norsk Telemuseum, Sørvågen). September 27, 2004 [online] – URL: http://www.lofoten-info.no/telemuseum/

7 Norwegian Telecom Museum (Norsk Telemuseum). September 22, 2004 [online] – URL: http://www.norsktele.museum.no/index_ieoff.html

Sources from Telektronikk’s archives

Petersen, H. The wireless telegraph installation Røst – Sørvågen (Det trådløse telegrafanlæg Røst – Sørvågen). Technical Information, 2 (2), 23–39, 1906.

Petersen, H. Radio telegraphy (Radiotelegrafi).Technical Information, 13 (8–12), 57–90, 1917 and14 (1–8), 1–61, 1918.

Birkeland, A M. The radio telephony station at Grip (Grip Radio-Telefonstasjon). Technical Information, 15 (9–11), 69–96, 1919.

Other sources

Simons, R W. Guglielmo Marconi and Early Systems of Wireless Communications. GEC Review, 11 (1), 37–55, 1996.

Marconi system from 1917 (Illustration from ‘Technical Information’, issue 11, 1917, p 77)

For a presentation of the author, turn to page 9.


2G (Second Generation mobile network) – Refers to the family of digital cellular telephone systems standardised in the 1980s and introduced in the 1990s. They introduced digital technology and carry both voice and data conversations. CDMA, TDMA and GSM are examples of 2G mobile networks [1].

3G (Third Generation mobile network) – The generic term for the next generation of wireless mobile communications networks supporting enhanced services like multimedia and video. Most commonly, 3G networks are discussed as graceful enhancements of 2G cellular standards, like e.g. GSM. The enhancements include larger bandwidth, more sophisticated compression techniques, and the inclusion of in-building systems. 3G networks will carry data at 144 kb/s, or up to 2 Mb/s from fixed locations. 3G will standardize mutually incompatible standards: UMTS FDD and TDD, CDMA2000, TD-CDMA [1].

4G (Fourth Generation mobile network) – A term used for future wireless systems providing broadband mobile access, with high mobility and bit rates up to 100 Mb/s.

ARQ (Automatic Repeat reQuest) – A standard method of checking transmitted data used on high-speed data communications systems. The sender encodes an error-detection field based on the contents of the message. The receiver recalculates the check field and compares it with the received one. If they match, an ‘ACK’ (acknowledgement) is transmitted to the sender. If they do not match, a ‘NAK’ (negative acknowledgement) is returned, and the sender retransmits the message [1].
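The ACK/NAK cycle can be sketched in a few lines of Python. This stop-and-wait model is illustrative only: the frame format, the simple sum checksum (a stand-in for a real error-detection code) and the channel functions are our assumptions, not from the original text.

```python
def checksum(payload: bytes) -> int:
    # Stand-in error-detection field (a real system would use e.g. a CRC).
    return sum(payload) % 256

def make_frame(payload: bytes) -> bytes:
    # Sender appends the error-detection field to the message.
    return payload + bytes([checksum(payload)])

def receive(frame: bytes) -> str:
    # Receiver recalculates the check field and compares it with the received one.
    payload, check = frame[:-1], frame[-1]
    return "ACK" if checksum(payload) == check else "NAK"

def send(payload: bytes, channel) -> int:
    # Sender retransmits until an 'ACK' comes back; returns the attempts used.
    attempts = 0
    while True:
        attempts += 1
        if receive(channel(make_frame(payload))) == "ACK":
            return attempts

# A clean channel delivers on the first attempt; a channel that corrupts the
# first transmission forces exactly one retransmission.
assert send(b"hello", lambda f: f) == 1
corrupt_first = iter([True, False])
assert send(b"hello",
            lambda f: bytes([f[0] ^ 1]) + f[1:] if next(corrupt_first) else f) == 2
```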

ATM (Asynchronous Transfer Mode) – A high-bandwidth, low-delay, connection-oriented, packet-like switching and multiplexing technique. ATM allocates bandwidth on demand, making it suitable for high-speed connections of voice, data and video services. Access speeds are up to 622 Mb/s, and backbone networks currently operate at speeds as high as 2.5 Gb/s. Standardised by ITU-T [1].

Availability – Dependability with respect to readiness for usage; measure of correct service delivery with respect to the alternation between correct and incorrect service [2].

AWGN (Additive White Gaussian Noise) – Usually used to denote thermal noise in a communications system. The noise has unlimited bandwidth with constant noise spectral density (white). The noise is uncorrelated with the useful signal, and can be added on a power basis to determine the resulting noise + signal power. The amplitude of the noise is stochastic, Gaussian distributed.
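The “added on a power basis” property can be demonstrated numerically. In this small plain-Python sketch the sample count, signal shape and noise variance are arbitrary choices of ours, used only to show that the powers of an uncorrelated signal and noise add:

```python
import random

random.seed(1)  # deterministic run for the illustration

# A unit-power signal plus white Gaussian noise of variance (power) 0.25.
# Because the noise is uncorrelated with the signal, the powers add:
# E[(s + w)^2] = 1.0 + 0.25 = 1.25.
n = 200_000
signal = [1.0 if i % 2 == 0 else -1.0 for i in range(n)]   # power 1.0
noise = [random.gauss(0.0, 0.5) for _ in range(n)]         # std 0.5 -> power 0.25
combined_power = sum((s + w) ** 2 for s, w in zip(signal, noise)) / n
assert abs(combined_power - 1.25) < 0.02
```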

B3G (Beyond 3rd Generation) – A term used for future wireless systems providing broadband mobile access, with high mobility and bit rates up to 100 Mb/s. It differs from the 4G term in that B3G is mainly used to describe a future integration of a multitude of wireless standards.

BER (Bit Error Rate) – The ratio of error bits to the total number of bits transmitted.

CCIR (Comité Consultatif International des Radiocommunications) – The International Radio Consultative Committee of the ITU, called ITU-R as from 1992. http://www.itu.int

CDMA (Code Division Multiple Access) – A digital, spread spectrum, packet-based access technique generally used in radio systems. CDMA is used in certain cellular systems, like e.g. IS-95, and is also used in 3G systems like UMTS.

Terms and acronyms


CELTIC (Cooperation for a European sustained Leadership in Telecommunications) – CELTIC is a five-year EUREKA cluster project, which started work in November 2003. The initiative is supported by most of the major European players in communication technologies. The main goal of CELTIC is to maintain European competitiveness in telecommunications through collaborative R&D. CELTIC projects are characterised by a holistic approach to telecoms networks, applications, and services. CELTIC is open to any kind of project participants from all EUREKA countries. http://www.celtic-initiative.org/

CEPT (Conférence Européenne des Administrations des Postes et des Télécommunications) – The European Conference of Postal and Telecommunications Administrations – CEPT – was established in 1959 by 19 countries, which expanded to 26 during its first ten years. Original members were the incumbent monopoly-holding postal and telecommunications administrations. CEPT’s activities included co-operation on commercial, operational, regulatory and technical standardisation issues. In 1988 CEPT created ETSI, the European Telecommunications Standards Institute, into which all its telecommunication standardisation activities were transferred. In 1992 the postal and telecommunications operators created their own organisations, PostEurope and ETNO respectively. In conjunction with the European policy of separating postal and telecommunications operations from policy-making and regulatory functions, CEPT became a body of policy-makers and regulators. At the same time, Central and Eastern European countries became eligible for membership of CEPT. With its 45 members, CEPT now covers almost the entire geographical area of Europe. http://www.cept.org/

Confidentiality – Absence of unauthorized disclosure of information [3].

CORBA (Common Object Request Broker Architecture) – A standard defined by the Object Management Group (OMG). It is a framework that provides interoperability between objects built in different programming languages, running on different physical machines, perhaps on different networks. CORBA specifies an Interface Definition Language (IDL) and an API (Application Programming Interface) that allows client/server interaction with the ORB (Object Request Broker).

COTS (Commercial Off The Shelf) – A product that is used “as-is”. COTS products are designed to be easily installed and to interoperate with existing system components.

CRC (Cyclic Redundancy Check) – A process used to check the integrity of a block of data. A CRC character is generated at the transmission end. Its value depends on the hexadecimal value of the number of binary ‘ones’ in the data block.
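As a concrete illustration, Python’s standard library exposes a CRC-32 implementation in `zlib`; this sketch (our example, not from the original) shows the check value matching for an intact block and a single flipped bit being detected:

```python
import zlib

# The sender computes a CRC-32 check value over the data block and
# transmits it along with the data.
data = b"100 years of telecom"
crc_sent = zlib.crc32(data)

# The receiver recalculates the CRC over the received block and compares.
received = data
crc_received = zlib.crc32(received)
assert crc_received == crc_sent            # match: block accepted

# A single flipped bit changes the CRC, so the corruption is detected.
corrupted = bytes([received[0] ^ 0x01]) + received[1:]
assert zlib.crc32(corrupted) != crc_sent   # mismatch: block rejected
```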

CSI (Channel State Information) – Near-instantaneous information about a transmission channel’s properties. Mostly used in radio communications, especially applied to smart antennas and MIMO systems. CSI may contain data about e.g. signal loss, frequency-selective variations and multipath conditions.

CSNR (Channel Signal-to-Noise Ratio) – The ratio between signal level and noise level on a transmission channel.

DAB (Digital Audio Broadcasting) – Terrestrial digital radio system, also called T-DAB, developed by the EUREKA 147 Project in the period 1986–1993. It offers near CD-quality sound, more stations and additional radio and data services. Standardized by ETSI and recognized by the ITU in 1994. Regular broadcasts in several European countries. Operates in the frequency bands 47–68 MHz, 87.5–108 MHz, 174–240 MHz and 1452–1479.5 MHz. First transmissions from 1995. Intended to replace conventional FM broadcasting. http://www.worlddab.org, http://www.eurekadab.org

dB (Decibel) – One tenth of a Bel, where ‘Bel’ refers to Alexander Graham Bell. A unit of measure of signal strength, usually the relation between a transmitted signal and a standard signal source. The decibel was invented by the Bell System to express gain or loss in telephone transmission systems. It is a logarithmic unit which defines the level of gain or loss in signal strength in a circuit, link or device compared to a reference value:

[dB] = 10 log10 (p / pref)

where p denotes the actual power level, and pref denotes the reference power.
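The formula can be exercised directly; a small plain-Python sketch (the helper names are ours) converts a power ratio to decibels and back:

```python
import math

def to_db(p: float, p_ref: float = 1.0) -> float:
    # [dB] = 10 * log10(p / p_ref)
    return 10 * math.log10(p / p_ref)

def from_db(db: float, p_ref: float = 1.0) -> float:
    # Inverse relation: p = p_ref * 10**(dB / 10)
    return p_ref * 10 ** (db / 10)

# Doubling the power adds about 3 dB; a tenfold increase adds exactly 10 dB.
assert abs(to_db(2.0) - 3.0103) < 1e-3
assert to_db(10.0) == 10.0
assert abs(from_db(20.0) - 100.0) < 1e-9
```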


Dependability – The ability to deliver a service that can justifiably be trusted. The service delivered by a system is its behaviour, as it is perceived by its user(s); a user is another system (physical, human) that interacts with the former at the service interface. The function of a system is what the system is intended for, and is described by the system specification [2].

DFT (Discrete Fourier Transform) – Discrete representation of the Fourier transform. The Fourier transform is a linear transformation which can be used to convert signals and signal representations between the time and frequency domains by the use of harmonic functions. The basis functions used are complex exponentials, e^(j2πft).
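A direct (O(N²)) evaluation of the DFT definition, X_k = Σ_n x[n]·e^(−j2πkn/N), can be written in a few lines; this plain-Python sketch is illustrative only (real applications use an FFT):

```python
import cmath

def dft(x):
    # Direct evaluation of X_k = sum_n x[n] * exp(-j*2*pi*k*n/N), k = 0..N-1.
    n_samples = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / n_samples)
                for n in range(n_samples))
            for k in range(n_samples)]

# A pure cosine with one cycle over N = 8 samples concentrates its energy
# in bins k = 1 and k = N-1 (the positive- and negative-frequency pair).
N = 8
signal = [cmath.cos(2 * cmath.pi * n / N) for n in range(N)]
spectrum = dft(signal)
assert abs(spectrum[1] - N / 2) < 1e-9
assert abs(spectrum[7] - N / 2) < 1e-9
assert abs(spectrum[0]) < 1e-9
```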

DVB (Digital Video Broadcasting) – An international digital broadcast standard for TV, audio and data. DVB can be broadcast via satellite, cable or terrestrial systems. It has initially been used in Europe and the Far East. http://www.dvb.org/

DVB-T (Digital Video Broadcasting – Terrestrial) – The terrestrial version of DVB.

ECN (Explicit Congestion Notification): A congestion avoidance scheme that marks packets instead of dropping them in the case of incipient congestion. The receivers of marked packets should return the information about marked packets to the senders, and the senders should decrease their transmit rate. To avoid heavy congestion, routers mark packets with a probability depending on an average queue length. IETF RFC 3168. http://www.ietf.org

EGPRS (Enhanced General Packet Radio Service): An enhancement of GPRS, which itself extends the GSM mobile communications system with support for data packets, enabling continuous flows of IP data packets for applications such as Web browsing and file transfer. EGPRS uses the 8PSK (8 Phase Shift Keying) modulation technique to increase the achievable user data rate in EDGE networks. http://www.etsi.org

ELAB (Elektronikklaboratoriet ved Norges Tekniske Høgskole): The Electronics Laboratory at the Norwegian Institute of Technology; a research institute, now part of SINTEF. http://www.sintef.no

End-to-end reestablishment: Reestablishment of information flows in a network after a failure by establishing new end-to-end virtual or physical paths, connections, or routing schemes for (at least) the affected flows. Also denoted "end-to-end restoration" and "global repair".

ERMES (European Radio Message System): ETSI standard for European wide-area paging operating in the VHF band. Ready in 1993, but not deployed in any substantial manner due to the success of GSM SMS.

Error: Part of a system state which is liable to lead to failure; the manifestation of a fault in a system [2].

ESS 1 (Electronic Switching System No. 1)

ETSI (European Telecommunications Standards Institute): A non-profit membership organization founded in 1988. The aim is to produce telecommunications standards to be used throughout Europe; the efforts are coordinated with the ITU. Membership is open to any European organization proving an interest in promoting European standards. ETSI was e.g. responsible for the making of the GSM standard. The headquarters are situated in Sophia Antipolis, France. http://www.etsi.org

Failure: Deviation of the delivered service from compliance with the specification; the transition from correct service delivery to incorrect service delivery [2].

Fault: Adjudged or hypothesized cause of an error; an error cause which is intended to be tolerated or avoided; the consequence for a system of the failure of another system which has interacted or is interacting with the considered system [2].


FEC (Forward Error Correction): A technique of error detection and correction in which a transmitting host includes a number of redundant bits in the payload (data field) of a block or frame of data. The receiving device uses the extra bits to detect, isolate and correct any errors created in transmission.
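A toy illustration of the principle, not any particular FEC scheme used in practice: a triple-repetition code, where majority voting over the redundant bits corrects a single transmission error per triple.

```python
def fec_encode(bits):
    """Triple-repetition code: transmit every bit three times."""
    return [b for bit in bits for b in (bit, bit, bit)]

def fec_decode(coded):
    """Majority vote over each triple corrects a single flipped bit."""
    return [int(sum(coded[i:i + 3]) >= 2) for i in range(0, len(coded), 3)]

sent = fec_encode([1, 0, 1])  # [1, 1, 1, 0, 0, 0, 1, 1, 1]
sent[4] = 1                   # simulate one bit error on the channel
print(fec_decode(sent))       # [1, 0, 1] -- the error is corrected
```

Real systems use far more efficient codes (convolutional, Reed-Solomon, LDPC), but the mechanism is the same: redundancy at the transmitter, correction at the receiver.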

FFT (Fast Fourier Transform): Fast algorithms for computing the DFT.

Flooding: Self-healing achieved by flooding the network with requests in a search for available capacity to replace that of a failed link.

Gb/s (Gigabits per second): Binary term denoting a digital data rate of 1024^3 = 1,073,741,824 bits per second.
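The glossary's binary reading of the prefixes (1 k = 1024, as opposed to the decimal 1000 commonly used for line rates) can be checked directly:

```python
# Binary unit prefixes as used in the Gb/s and Mb/s glossary entries.
KILO = 1024
MEGA = 1024 ** 2  # 1,048,576
GIGA = 1024 ** 3  # 1,073,741,824

print(MEGA, GIGA)  # 1048576 1073741824
```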

GMDSS (Global Maritime Distress and Safety System): The GMDSS became fully effective on 1 February 1999 and is a worldwide network of automated emergency communications for ships at sea. All oceangoing passenger ships and cargo ships of 300 gross tonnage and upwards must be equipped with radio equipment that conforms to international standards as set out in the system. The basic concept is that search and rescue authorities ashore, as well as shipping in the immediate vicinity of the ship in distress, will be rapidly alerted through satellite and terrestrial communication techniques to a distress incident so that they can assist in a coordinated SAR (Search And Rescue) operation with the minimum of delay. http://www.imo.org

GMPLS (Generalized Multi-Protocol Label Switching): MPLS signalling applied not only to support packet-based paths but also other technologies such as optical MUX (multiplexer), ATM and Frame Relay switches.

GPRS (General Packet Radio Service): An enhancement to the GSM mobile communications system that supports data packets. GPRS enables continuous flows of IP data packets over the system for applications such as Web browsing and file transfer. Supports up to 160 kb/s gross transfer rate; practical rates are from 12–48 kb/s. http://www.etsi.org

GSM (Global System for Mobile Communications): A digital cellular phone technology that is the predominant system in Europe, but is also used around the world. Developed in the 1980s by CEPT and ETSI; GSM was first deployed in seven European countries in 1992. It operates in the 900 MHz and 1.8 GHz bands in Europe and the 1.9 GHz PCS band in North America. GSM defines the entire cellular system, from the air interface to the network nodes and protocols. As of January 2004, there were more than 1 billion GSM users in more than 200 countries worldwide. http://www.gsmworld.com/, http://www.etsi.org

HDR (High Data Rate)

HPA (High Power Amplifier)

HW (Hardware)

IA5 (International Alphabet No. 5): Now the International Reference Alphabet No. 5 (IRA5) and previously International Alphabet No. 5 (ISO 646); defined in ITU-T T.50 as a 7-bit character code. There are multiple national versions. http://www.iso.org, http://www.itu.int

ICT (Information and Communication Technologies)

IDFT (Inverse Discrete Fourier Transform): The inverse linear transformation of the Discrete Fourier Transform (DFT).

IFFT (Inverse Fast Fourier Transform): The inverse linear transformation of the Fast Fourier Transform (FFT).


IFIP (International Federation for Information Processing): A non-governmental, non-profit umbrella organization for national societies working in the field of information processing. It was established in 1960 under the auspices of UNESCO as an aftermath of the first World Computer Congress held in Paris in 1959. Today, IFIP has several types of members and maintains friendly connections to specialized agencies of the UN system and non-governmental organizations. Technical work, which is the heart of IFIP's activity, is managed by a series of Technical Committees and Working Groups. http://www.ifip.or.at

IMO (International Maritime Organization): Established in 1948 at an international conference in Geneva. The original name was the Inter-Governmental Maritime Consultative Organization, or IMCO, but the name was changed in 1982 to IMO. The IMO Convention entered into force in 1958 and the new organization met for the first time the following year. The purposes of the Organization, as summarized by Article 1(a) of the Convention, are "to provide machinery for cooperation among Governments in the field of governmental regulation and practices relating to technical matters of all kinds affecting shipping engaged in international trade; to encourage and facilitate the general adoption of the highest practicable standards in matters concerning maritime safety, efficiency of navigation and prevention and control of marine pollution from ships". The Organization is also empowered to deal with administrative and legal matters related to these purposes. http://www.imo.org

IMSO (International Mobile Satellite Organization): The organizational part of the former INMARSAT.

IMT-2000 (International Mobile Telecommunications 2000): The global standard for third generation (3G) wireless communications, defined by a set of interdependent ITU Recommendations. IMT-2000 provides a framework for worldwide wireless access by linking the diverse systems of terrestrial and/or satellite based networks. It will exploit the potential synergy between digital mobile telecommunications technologies and systems for fixed and mobile wireless access systems. http://www.itu.int

INMARSAT (International Maritime Satellite Organization): The international organization formed in 1979, responsible for the international maritime satellite communications system. Changed name to IMSO in 1994, and the operations were privatized to Inmarsat Ltd in 1999. http://www.inmarsat.com

Integrity: Absence of improper system state alterations [3].

INTELSAT (International Satellite Organization): Established August 20, 1964 on the basis of agreements signed by governments and operating entities. It established the first commercial global satellite communications system and changed the way the world connects. http://www.intelsat.com

IP (Internet Protocol): A protocol for communication between computers, used as a standard for transmitting data over networks and as the basis for standard Internet protocols. http://www.ietf.org

IPO (Initial Public Offering): The first sale of stock by a company to the public.

ISDN (Integrated Services Digital Network): A digital telecommunications network that provides end-to-end digital connectivity to support a wide range of services, including voice and non-voice services, to which users have access by a limited set of standard multipurpose user-network interfaces. The user is offered one or more 64 kb/s channels. http://www.itu.int

ISI (Inter-Symbol Interference): The interference between adjacent pulses of a transmitted code.

ITU (International Telecommunication Union): On May 17, 1865, the first International Telegraph Convention was signed in Paris by the 20 founding members, and the International Telegraph Union (ITU) was established to facilitate subsequent amendments to this initial agreement. It changed name to the International Telecommunication Union in 1934. From 1948 a UN body with approx. 200 member countries. It is the top forum for discussion and management of technical and administrative aspects of international telecommunications. http://www.itu.int


Jgroup/ARM (Java Group/Autonomous Replication Management): A specific Java-based group communication system with Autonomous Replication Management.

LAN (Local Area Network): A network shared by communicating devices, usually in a small geographical area.

Layering: System architecture approach where the total functionality of the system is divided into layers, each layer providing a well defined subset of the functionalities and where a layer uses the services from the layer(s) below, e.g. the ISO OSI model and transport networks. Typically, lower layers deal with issues of the physical environment, higher layers provide functions with increasing abstraction, and the system's services are provided by the top layer.

LDPC (Low Density Parity Check): Binary error correcting codes defined in terms of low density parity check matrices. Defined by R. Gallager in 1962, and rediscovered in 1996 by D. MacKay and R. Neal. Current interest is in their use in wireless and inter-planetary communication.

LOS (Line of Sight): A term often associated with radio transmission systems, indicating that there is a clear path between the transmitter and receiver.

MAC (Medium Access Control): The lower of the two sublayers of the Data Link Layer. In general terms, MAC handles access to a shared medium, and can be found within many different technologies. For example, MAC methodologies are employed within Ethernet, GPRS, and UMTS.

MAD (Mobility, Adaptivity and Dependability)

Maintainability: A system's ability to undergo repairs and modifications [3]. A measure of the time needed to rectify failures for a given O&M environment.

MAP (Mobile Application Part): A protocol that enables real-time communication between nodes in a mobile cellular network. A typical usage of the MAP protocol would be the transfer of location information from the VLR (Visitor Location Register) to the HLR (Home Location Register).

MARISAT (Maritime Communications Satellite): A pre-INMARSAT maritime satellite system, established by the USA in 1976. Now part of Boeing Satellite Systems Inc. http://www.boeing.com/satellite

Mb/s (Megabits per second): Binary term denoting a digital data rate of 1024^2 = 1,048,576 bits per second.

MBMS (Multimedia Broadcast/Multicast Service): A broadcast/multicast service defined for UMTS.

MIMO (Multiple Input Multiple Output): An arbitrary wireless communication system in which the transmitting end as well as the receiving end is equipped with multiple antenna elements.

MPLS (Multi-Protocol Label Switching): Developed to speed up the transmission of IP-based communications over ATM networks. The system works by adding a much smaller "label" to a stream of IP datagrams, allowing MPLS-enabled ATM switches to examine and switch the packets much faster.

MSC (Mobile Switching Centre): A telecommunication switch or exchange within a cellular network architecture which is capable of interworking with location databases.

NEC (Nippon Electrical Company): Japanese electronics company, manufacturer of i.a. satellite earth stations. http://www.nec.com/

NTA (Norwegian Telecommunications Administration): Norwegian "Teledirektoratet". The name and organization of Televerket's (Telenor's) administrative headquarters in Oslo 1969–1994. http://www.telenor.com


NTNF (The Royal Norwegian Council for Scientific and Industrial Research): The predecessor of the Norwegian Research Council (NFR) (1946–93). http://www.forskningsradet.no/

O&M (Operation and Maintenance): The processes and functions used in managing a network or an element within a network.

OFDM (Orthogonal Frequency Division Multiplexing): A spread spectrum technique that distributes the data over a large number of carriers spaced apart at precise frequencies. This spacing provides the "orthogonality" in this technique, which prevents the demodulators from seeing frequencies other than their own. The benefits of OFDM are high spectral efficiency, resiliency to RF interference, and lower multipath distortion. This is useful because in a typical terrestrial wireless scenario there are multipath channels (i.e. the transmitted signal arrives at the receiver using various paths of different lengths). Since multiple versions of the signal interfere with each other (inter-symbol interference (ISI)), it becomes very hard to extract the original information. OFDM is sometimes called multi-carrier or discrete multi-tone modulation. It is the modulation technique used for digital TV in Europe, Japan and Australia, and is used in the DAB, ADSL, WLAN 802.11a/g and WMAN 802.16 standards.

OSI (Open Systems Interconnection): Refers to the 7-layer reference model developed by the ISO. The reference model breaks communication functions down into one of seven layers, each layer providing clearly defined services to adjacent layers. They are often referred to as Layer 1 through to 7:
1 Physical layer
2 Data Link layer
3 Network layer
4 Transport layer
5 Session layer
6 Presentation layer
7 Application layer

OxS (Optical x Switching): x ∈ {C, B, P}, where C: circuit, B: burst and P: packet.

PAN (Personal Area Network): A group of personal devices communicating seamlessly together. Also referred to as a WPAN (Wireless Personal Area Network), an example being the Bluetooth™ system.

PAPR (Peak-to-Average Power Ratio): Also known as the "crest factor". The ratio between the peak and average power level applied to a carrier frequency.
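For a sampled signal, the crest factor can be estimated as the ratio of peak to mean sample power, here in dB (a sketch on discrete samples; real measurements work on the analogue waveform):

```python
import math

def papr_db(samples):
    """Peak-to-average power ratio of a sampled signal, in dB."""
    powers = [s * s for s in samples]
    peak = max(powers)
    average = sum(powers) / len(powers)
    return 10 * math.log10(peak / average)

# A constant-envelope signal has a PAPR of 0 dB.
print(papr_db([1.0, -1.0, 1.0, -1.0]))  # 0.0
```

High PAPR is one of the practical drawbacks of OFDM, since the power amplifier must handle peaks far above the average level.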

PLMN (Public Land Mobile Network): Common notation in the 1980s for a land mobile network of any category that was used to offer public services.

PMR (Private Mobile Radio): Radio communication systems used by small to medium sized groups of users.

POTS (Plain Old Telephone Service): A very general term used to describe an ordinary voice telephone service. See also PSTN.

Protection: Techniques to make networks fault tolerant, where a dedicated spare path is established between the end nodes of the protected path or sub-path.

PSTN (Public Switched Telephone Network): Common notation for the conventional analogue telephone network.

PTT (Post and Telecommunication operator/authority): Common notation for the postal and telecom regulatory authority in a country. Previously a common term for the national postal and telecom operator.

QoS (Quality of Service): The "degree of conformance of the service delivered to a user by a provider, with an agreement between them". The agreement is related to the provision/delivery of this service. Defined by EURESCOM project P806 in 1999 and adopted by ITU-T in Recommendation E.860 [4]. http://www.itu.int, http://www.eurescom.de


Reconfiguration: Techniques to make networks fault tolerant based upon centralized management, where a network management system supervises and controls all network resources, and reconfigures the paths in the network to maintain the traffic flows when a network element fails.

Reliability: Dependability with respect to continuity of service. A measure of continuous correct service; a measure of time to failure [2].

Rerouting: Self-healing achieved by letting the traffic handling mechanisms of the network and the user equipment re-establish individual packet flows/connections after they have been disrupted by a network failure, e.g. Internet (re)routing.

RF (Radio Frequency): Any frequency within the electromagnetic spectrum normally associated with radio wave propagation.

RLP (Radio Link Protocol): Radio link protocol of the circuit switched data services in GSM.

RSC (Recursive, Systematic, Convolutional)

Rx (Receiver): The terminator of any signal on a transmission medium.

SACCH (Slow Associated Control Channel): A GSM signalling channel that provides a relatively slow signalling connection. The SACCH is associated with either a traffic or a dedicated channel. The SACCH can also be used to transfer SMS (Short Message Service) messages if associated with a traffic channel.

Safety: Dependability with respect to the non-occurrence of catastrophic failures. A measure of continuous delivery of either correct service or incorrect service after a benign failure; a measure of the time to catastrophic failure [2].

SC (Service Centre): The node catering for the store-and-forward and service capabilities of SMS.

SDCCH (Stand-Alone Dedicated Control Channel): This channel is used in the GSM system to provide a reliable connection for signalling and SMS (Short Message Service) messages. The SACCH (Slow Associated Control Channel) is used to support this channel.

SDH (Synchronous Digital Hierarchy): A standard technology for synchronous data transmission on optical media. It is the international equivalent of the North American SONET (Synchronous Optical Network). It is a method of transmitting digital information where the data is packed in containers that are synchronized in time, enabling relatively simple modulation and demodulation at the transmitting and receiving ends. The technique is used to carry high capacity information over long distances.

Self-healing: A common denominator for several techniques to make networks fault tolerant which have distributed control and require no dedicated pre-reserved transmission capacity.

SINR (Signal-to-Interference+Noise Ratio): The power ratio between the useful signal level (C) and the sum of the interference signals (I) and the thermal noise level (N). Often expressed in dB.

SINR = C / (I + N)
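Expressed in dB, this becomes 10 · log10(C / (I + N)). A small sketch (variable names are ours):

```python
import math

def sinr_db(c: float, i: float, n: float) -> float:
    """SINR = C / (I + N), returned in decibels."""
    return 10 * math.log10(c / (i + n))

# With no interference (I = 0) the SINR reduces to the plain SNR = C / N.
print(round(sinr_db(c=1.0, i=0.0, n=0.01), 1))   # 20.0
print(round(sinr_db(c=1.0, i=0.09, n=0.01), 1))  # 10.0
```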

SISO (Single Input Single Output): Radio transmission system with a single transmitter antenna and a single receiver antenna.

SME (Short Message Entity): Any type of device that may act as an originator or a recipient of a short message in GSM.

SMS (Short Message Service): A means by which short messages can be sent to and from digital cellular phones, pagers and other handheld devices. Alphanumeric messages of up to 160 characters can be supported [1].

SNR (Signal-to-Noise Ratio): The power ratio between the useful signal level (C) and the thermal noise level (N). Often expressed in dB.

SNR = C / N


SOLAS (International Convention for the Safety of Life at Sea): The SOLAS Convention in its successive forms is generally regarded as the most important of all international treaties concerning the safety of merchant ships. The first version was adopted in 1914 in response to the Titanic disaster, the second in 1929, the third in 1948 and the fourth in 1960. The 1960 Convention was adopted on 17 June 1960 and entered into force on 26 May 1965. It was the first major task for IMO after the Organization's creation and it represented a considerable step forward in modernizing regulations and in keeping pace with technical developments in the shipping industry. The main objective of the SOLAS Convention is to specify minimum standards for the construction, equipment and operation of ships, compatible with their safety. http://www.imo.org

Span reestablishment: Reestablishment of affected information flows in a network after a failure by replacing the part of a virtual or physical path or connection affected by the failure with new sub-paths or connections. Also denoted "span restoration" and "local repair".

SPC (Stored Program Control): Common in all modern telephone switches. Stored software (the program) controls the computer or microprocessor, which in turn controls the operation of the switch [1].

SS no. 7 (Signalling System No. 7): A CCS (Common Channel Signalling) system defined by the ITU-T. SS7 is used in many modern telecom networks and provides a suite of protocols that enables circuit and non-circuit related information to be routed about and between networks. The main protocols include MTP (Message Transfer Part), SCCP (Signalling Connection Control Part) and ISUP (ISDN User Part). http://www.itu.int

STBC (Space-Time Block Code): Special case of Space-Time Codes (STC). STC schemes use a number of code symbols equal to the number of Tx antennas in MIMO. These are generated and transmitted simultaneously, one symbol from each antenna. The symbols are generated by a space-time encoder such that, by using an appropriate signal processing and decoding procedure at the receiver, the diversity gain and/or the coding gain is maximized. STBC uses block codes and was described by S. Alamouti in 1998.

STSK (Skandinavisk Telesatelittkomité): The Scandinavian Tele-satellite Committee, established in 1964.

SW (Software): Term denoting code and programs not hardwired into the equipment, but giving instructions and data for a computer's central processing unit.

TCAP (Transaction Capabilities Application Part): Enables the deployment of advanced intelligence in the network by supporting non-circuit related information exchange between SPs (Signalling Points) using the SCCP (Signalling Connection Control Part) connectionless service.

TCP (Transmission Control Protocol): Transport layer protocol defined for the Internet by Vint Cerf and Bob Kahn in 1974. A reliable octet streaming protocol used by the majority of applications on the Internet. It provides a connection-oriented, full-duplex, point-to-point service between hosts. http://www.ietf.org

TINA (Telecommunication Information Networking Architecture): A developing standard which is intended to resolve issues of integration between TMN (Telecommunications Management Network) and IN (Intelligent Network) standards and concepts. TINA focuses on the definition and validation of an open architecture for worldwide telecom services through a flexible software architecture for both end-user and network management services. http://www.tinac.com

Tx (Transmitter): The source or generator of any signal on a transmission medium.

UHF (Ultra-High Frequencies): Notation used to denote the frequency band from 300 to 3,000 MHz.

UMTS (Universal Mobile Telecommunications System): The European member of the IMT-2000 family of 3G wireless standards. UMTS supports data rates of 144 kb/s for vehicular traffic, 384 kb/s for pedestrian traffic and up to 2 Mb/s in support of in-building services. UMTS was initiated by ETSI, but is now standardised by the Third Generation Partnership Project (3GPP). http://www.3gpp.org/


UNINETT: The academic network interconnecting the Norwegian universities with the global Internet.

WAN (Wide Area Network): A network that provides data communications to a large number of independent users spread over a larger geographic area than that of a LAN (Local Area Network). It may consist of a number of LANs connected together.

WARC (World Administrative Radio Conference): Global conferences held every two to three years by the ITU Radiocommunication Sector (ITU-R) to revise the Radio Regulations, the international treaty governing the use of the radio-frequency spectrum and the geostationary-satellite and non-geostationary-satellite orbits. In 1995 it was renamed WRC (World Radiocommunication Conference). The next conference will be in 2007. http://www.itu.int/ITU-R/conferences/wrc/index.asp

WLAN (Wireless Local Area Network): A generic term covering a multitude of technologies providing local area networking via a radio link. Examples of WLAN technologies include Wi-Fi (Wireless Fidelity) 802.11b and 802.11a, HiperLAN, Bluetooth and IrDA (Infrared Data Association).

WWRF (Wireless World Research Forum): Created in 2001 as a result of the work of the Wireless Strategic Initiative (WSI), a partly EU-funded project started by Alcatel, Ericsson, Nokia and Siemens. The objective of the forum is to formulate visions on strategic future research directions in the wireless field, among industry and academia, and to generate, identify, and promote research areas and technical trends for mobile and wireless system technologies. http://www.wireless-world-research.org

WWW (World Wide Web): An international, virtual, network-based information service composed of Internet host computers that provide on-line information. Created at CERN, Geneva in 1991. http://www.w3c.org/

X.25: The ITU specifications defining packet switched services. A widely available, low speed, packet switched data service. http://www.itu.int

X.400: The ITU specifications defining message handling services (MHS). A universal protocol for email; X.400 defines the envelope for email messages so all messages conform to a standard format. http://www.itu.int

References

1 Newton, H. Newton's Telecom Dictionary. San Francisco, USA, CMP Books, 2003.

2 Laprie, J-C (ed). Dependability: Basic Concepts and Associated Terminology. Dependable Computing and Fault-Tolerant Systems, Vol. 5. Springer Verlag, 1992.

3 Avizienis, A, Laprie, J-C, Randell, B. Fundamental concepts of dependability. In: Position Papers for the Third Information Survivability Workshop – ISW-2000. The Institute of Electrical and Electronics Engineers, Inc, October 24–26, 2000.

4 ITU-T. Draft new ITU-T Recommendation E.860 (formerly E.SLA) – Framework of a service level agreement. Geneva, Switzerland, ITU, 2003.
