What is SOHO Network (Short Overview)
Posted on November 7, 2009, 7:06 am, by Danish.
One network type that is growing in popularity is the small office/home office (SOHO) network,
which generally includes fewer than 10 PCs and may not include servers at all. Network resources
such as DNS server resolution and e-mail servers are generally located offsite, either hosted by
an ISP or at a corporate office. Internet access for the SOHO network is usually provided by
cable, DSL, or perhaps ISDN. The boundary between the LAN and the WAN connections is an
inexpensive router, frequently costing less than $100.
This router may also serve double duty as a firewall to shield the SOHO network from malicious
activity originating outside the network. On the LAN side of the network, either a workgroup
hub or low-end switch may be used to provide interconnections between client PCs and the
router, and many routers include an integral hub or switch. Due to its simplicity, Ethernet is
generally the LAN standard used to wire the SOHO network. Wireless standards such as 802.11b
are starting to appear for the SOHO market, eliminating the need for adding LAN wiring in the
home.
The IP suite of protocols is used for communications on the Internet. In addition to IP, there may be a requirement to support other protocols used in the corporate network, or to provide local communication within the SOHO network. In later chapters, we will discuss how this complex environment might be supported. In some instances, when a small office needs to connect to a corporate environment in a secure manner, some sort of VPN device is either built into the router itself or placed on the LAN. The figure below shows an illustration of a SOHO network.
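As a concrete sketch of the private addressing such a network typically uses, the snippet below models the LAN side of a SOHO router in Python. The subnet and device names are assumed example values, not taken from the text.

```python
import ipaddress

# Hypothetical SOHO addressing sketch: the subnet and device names are
# assumed example values. The router's LAN side uses a private RFC 1918
# block, while its WAN side holds the single ISP-assigned public address.
LAN_SUBNET = ipaddress.ip_network("192.168.1.0/24")

devices = {
    "router-lan": "192.168.1.1",   # default gateway / firewall
    "pc-1": "192.168.1.10",
    "pc-2": "192.168.1.11",
    "printer": "192.168.1.20",
}

for name, addr in devices.items():
    ip = ipaddress.ip_address(addr)
    assert ip in LAN_SUBNET and ip.is_private
    print(f"{name:>10} {addr} (private LAN address)")
```

Every client shares the one router address toward the WAN, which is why a sub-$100 router can serve the whole office.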
Description of Enterprise Network
Posted on November 14, 2009, 3:14 pm, by Danish.
The largest and most complex of network types is the enterprise network. These networks are
found around the world in the offices of multinational corporations. While a company may have
a main corporate headquarters, the network itself may have more than one data center, acting as a
regional hub. The data centers would be connected to one another using some form of high-speed
WAN; in addition, numerous lower-speed spoke networks radiate from each hub, connecting
branch offices, SOHO telecommuters, and traveling employees. The reliance on computer
networks creates some serious challenges for today’s corporations.
Network reliability and security are essential, particularly when connected to the Internet.
Companies must be willing to make significant investments in hardware, software, and people to
achieve these goals. Not doing so could be fatal. As with the medium-sized company, large
company networks use a variety of LAN technologies. The most common technology is
Ethernet, but other technologies may be found, including Token Ring and Fiber Distributed Data
Interface (FDDI). Unlike smaller companies, the large corporate network most likely evolved
through the years as technology matured, and as mergers, acquisitions, and new branch offices
added new network segments. As such, the enterprise network is best conceived of as many
different LAN technologies connected by WAN links. The figure below shows the enterprise
network with hubs and firewalls in place.
Many different networking protocols are likely in the corporate network, particularly in older,
more established companies. They will be supporting many legacy applications and protocols
alongside the IP suite. In short, the network is a microcosm of the Internet as a whole, except
under the administrative control of one or more IT professionals. The enterprise network
topology is complex. Typically, the WAN links between the hubs of the network will be
engineered to operate as a high-speed and reliable backbone connection. Each part of the hub
network operates as a transit network for the backbone as well. This means that data from one
remote office to another remote office will be routed through one or more hubs. This backbone
network may be so large and so well engineered that the hubs will also serve as transit networks
for information from other corporations. Since the enterprise network is composed of many hubs,
branch offices, and SOHOs, the internal LAN topology will resemble that of the branch office
closely. Information from the backbone will be distributed to the edges of the network and from
there will access the LANs in a hierarchical fashion. One remote office sending traffic to another
remote office must do so through the backbone because the offices do not share a direct
connection.
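The hub-and-spoke forwarding just described can be sketched with a toy topology. The office and hub names below are made up for illustration, and breadth-first search stands in for whatever routing protocol the real backbone would run.

```python
from collections import deque

# Toy hub-and-spoke topology (office and hub names are invented): remote
# offices attach only to their regional hub, and the hubs form the backbone.
links = {
    "office-1": ["hub-east"],
    "office-2": ["hub-west"],
    "hub-east": ["office-1", "hub-west"],
    "hub-west": ["office-2", "hub-east"],
}

def route(src, dst):
    """Breadth-first search for the forwarding path from src to dst."""
    seen, frontier = {src}, deque([[src]])
    while frontier:
        path = frontier.popleft()
        if path[-1] == dst:
            return path
        for nxt in links[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])

print(route("office-1", "office-2"))
```

Because the offices share no direct link, the only path transits both backbone hubs, exactly as the text describes.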
Because of the complexity, size, and importance of the information on the network to the
financial health of the company, staff will be devoted solely to network security on the enterprise
network. Users will be strictly policed through the use of passwords, internal firewalls, and
proxy servers. Network usage such as e-mail and Web access will be monitored, and well-defined
and strict network security policies will be in place and enforced on a regular basis.
While branch offices may have a person responsible for the security of that network under
guidelines from the main office, some sort of network operations center will monitor the health
and security of the network full time from a central location. Firewalls, proxy servers, and
intrusion detection hardware and software will also be in use throughout the network to help
provide network security. To protect communications between hubs and between the remote
branch or SOHO user, VPN devices will also be employed. Physically, the network will be
secured as well, and access to servers and workstations will be controlled by locks and identity
checks whenever possible.
Telephone Network Structure in the Field of Telecommunication
Posted on October 7, 2009, 12:20 pm, by Danish.
If you wished, you could create a simple telephone network by running a line between each
person’s telephone and the telephone of every other subscriber to whom that person might wish
to talk. However, the amount of wire required for such a network would be overwhelming.
Interestingly enough, the first telephone installations followed exactly this method; with only a
few telephones in existence, the number of wires was manageable. As the telephone caught on,
this approach proved to be uneconomical. Therefore, the telephone industry of today uses a
switched network, in which a single telephone line connects each telephone to a centralized
switch. This switch provides connections that are enabled only for the period during which two
parties are connected. Once the conversation/transmission is concluded, the connection is
broken.
This switched network allows all users to share equipment, thereby reducing network costs. The
amount of equipment that is shared by the users is determined by the traffic engineers and is
often a cost tradeoff. Indeed, a guiding principle of network design is to provide a reasonable
grade of service in the most cost-effective manner. The switched network takes advantage of the
fact that not everyone wants to talk at the same time. The direct connection from each telephone
to a local switch is called the local loop (or line) that, in the simplest case, is a pair of wires.
Typically, each subscriber has a dedicated wire pair that serves as the connection to the network.
In party-line service, this local loop is shared by multiple subscribers (in early rural networks,
eight-party service was common).
Most telephone networks require that each switch provide connections between the lines of any
two subscribers that connect to that switch. Because there is a community of interest among the
customers served by a switch, most calls are just line-to-line connections within one switch.
However, any two subscribers should be able to connect, and this requires connections between
switches so customers served by two different switches can complete calls. These switch-to-
switch connections are called trunks. If 10 trunks connect offices A and B, only 10 simultaneous
conversations are possible between subscribers on switch A and subscribers on switch B.
But, as soon as one call is concluded, the trunk becomes free to serve another call. Therefore,
many subscribers can share trunks sequentially. Traffic engineers are responsible for calculating
the proper number of trunks to provide between switches.
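One classical tool for the trunk-sizing problem is the Erlang B formula, which gives the probability that a new call finds all trunks busy. The sketch below implements it with the standard recurrence; the 5 erlangs of offered traffic and 1% blocking target are assumed example figures.

```python
def erlang_b(offered_erlangs, trunks):
    """Probability a new call finds all trunks busy (Erlang B), computed
    with the numerically stable recurrence
        B(0) = 1,   B(m) = A*B(m-1) / (m + A*B(m-1))."""
    b = 1.0
    for m in range(1, trunks + 1):
        b = offered_erlangs * b / (m + offered_erlangs * b)
    return b

def trunks_needed(offered_erlangs, max_blocking):
    """Smallest trunk count that meets the grade-of-service target
    (e.g. max_blocking=0.01 means at most 1% of calls blocked)."""
    m = 1
    while erlang_b(offered_erlangs, m) > max_blocking:
        m += 1
    return m

# With 5 erlangs of offered traffic and a 1% blocking target:
print(trunks_needed(5.0, 0.01))   # 11 trunks
```

This is the cost tradeoff mentioned above in miniature: a tighter grade of service (smaller blocking probability) demands more trunks, and the engineer picks the cheapest count that still meets the target.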
Evolution of Computing and Mainframes
Posted on October 19, 2009, 3:31 am, by Danish.
It is hard to imagine life without computers. Computers are everywhere—from small
microprocessors in watches, microwave ovens, cars, calculators, and PCs, to mainframes and
highly specialized supercomputers. A series of hardware and software developments, such as the
development of the microchip, made this revolution possible. Moreover, computers today are
rarely stand-alone devices. They are connected into networks that span the globe to provide us
with a wealth of information. Thus, computers and communications have become increasingly
interdependent. The nature and structure of computer networks have changed in conjunction with
hardware and software technology. Computers and networks have evolved from the highly
centralized mainframe systems of the 1950s and 1960s to the distributed systems of the 1990s
and into the new millennium.
Today’s enterprise networks include a variety of computing devices, such as terminals, PCs,
workstations, minicomputers, mainframes, and supercomputers. These devices can be connected
via a number of networking technologies: data is transmitted over local area networks (LANs)
within a small geographic area, such as within a building; metropolitan area networks (MANs)
provide communication over an area roughly the size of a city; and wide area networks (WANs)
connect computers throughout the world.
Mainframes
The parent of all computers is the mainframe. The first mainframe computers were developed in
the 1940s, but they were largely confined to research and development uses. These machines
were huge—in size and in price. Together with connected input/output (I/O) devices, they
occupied entire rooms. The systems were also highly specialized; they were designed for specific
tasks, such as military applications, and required specialized environments. Not surprisingly, few
organizations could afford to acquire and maintain these costly devices. Any computer is
essentially a device to accept data (i.e., input), process it, and return the results (i.e., output). The
early mainframe computers in the 1950s were primarily large systems placed in a central area
(the computer center), where users physically brought programs and data on punched cards or
paper tapes. Devices, such as card or paper-tape readers, read jobs into the computer’s memory;
the central processing unit (CPU) would then process the jobs sequentially to completion. The
user and computer did not interact during processing.
The systems of the 1950s were stand-alone devices—they were not connected to other
mainframes. The processor communicated only with peripheral I/O devices such as card readers
and printers, over short distances, and at relatively low speeds. In those days, one large computer
usually performed the entire company’s processing. Because of the long execution times
associated with I/O-bound jobs, turnaround times were typically quite long. People often had to
wait 24 hours or more for the printed results of their calculations. For example, by the time
inventory data had been decremented to indicate that a refrigerator had been sold, a day or two
might have passed with additional sales to further reduce inventory. In such a world, the concept
of transaction processing, in which transactions are executed immediately after they are received
by the system, was unheard of. Instead, these early computing systems processed a collection, or
batch, of transactions that took place over an extended time period. This gave rise to the term
batch processing. In batch jobs, a substantial number of records must be processed, with no
particular time criticality. Several processing tasks of today still fit the batch-processing model
perfectly (such as payroll and accounts payable).
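The batch model can be sketched in a few lines, echoing the refrigerator example above; the inventory figures are invented for illustration.

```python
# Batch-processing sketch: a day's transactions are collected first, then
# applied to the master records in one sequential run. Nothing is
# processed as it arrives, so the master data lags until the batch runs.
inventory = {"refrigerator": 10, "stove": 5}

# Transactions accumulate over an extended period...
batch = [("refrigerator", 1), ("stove", 2), ("refrigerator", 3)]

# ...and the whole batch then runs to completion in one pass.
for item, qty in batch:
    inventory[item] -= qty

print(inventory)   # {'refrigerator': 6, 'stove': 3}
```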
Although the mainframe industry has lost market share to vendors of smaller systems, the large
and expensive mainframe system, as a single component in the corporate computing structure, is
still with us today and is not likely to disappear in the near future. IBM is still the leading vendor
of mainframes, with its System/390 computers, and SNA is still the predominant mainframe-oriented
networking architecture. Although IBM has been developing noncentralized networking
alternatives, the model for mainframe communications remains centralized, which is perfectly
adequate for several business applications in which users need to access a few shared
applications.
In an airline reservation database application, for example, the users' primary goal is not to
communicate with each other, but to get up-to-date flight information. It makes sense to maintain
this application in a location that is centrally controlled and can be accessed by everybody.
Moreover, this application requires a large disk storage capacity and fast processing, features a
mainframe provides. Banks and retail businesses also use mainframes and centralized
networking approaches for tasks such as inventory control and maintaining customer
information.
Client or Server Computing History
Posted on October 25, 2009, 11:34 am, by Danish.
The rapid proliferation of PCs in the workplace quickly exposed a number of their weaknesses.
A stand-alone PC can be extremely inefficient. Any computing device requires some form of I/O
system. The basic keyboard and monitor system is dedicated to one user, as it is hardly expected
that two or more users will share the same PC at the same time. The same is not true for more
specialized I/O devices, with which, for example, two or three printers attached to a mainframe
or minicomputer environment can be accessed by any system user who can download a file to
the printer. It might mean a walk to the I/O window to pick up a printout, but a few printers can
meet many users’ needs. In the stand-alone PC world, the printer is accessible only from the
computer to which it is attached. Because the computer is a single-user system accessible only
from the local keyboard, the printer cannot be shared, and therefore, must be purchased for each
PC that needs to print; otherwise, PCs with dedicated printers must be available for anyone’s use,
in which case a user would take a file or data (usually on a floppy disk) to the printer station to
print it.
This is affectionately referred to as sneakernet. It doesn't take a rocket scientist to note the
waste in time, resources, and flexibility of this approach. We use printers here as just one
example of the inefficiency of stand-alone PCs. Any specialized I/O device faces the same
problems (i.e., plotters, scanners, and so on), along with such necessities as hard disk space
(secondary storage) and even the processing power itself. Software is another resource that
cannot be shared in stand-alone systems. Separate programs must be purchased and loaded on each
station. If a department or company maintains database information, the database needs to be
copied to any station that needs it. This is a sure-fire formula for inconsistent information or
for creating an information bottleneck at some central administrative site. Finally, the
stand-alone PC is isolated from the resources of the mainframe or minicomputer environment.
Important information assets are not available, usually leading to two or more separate devices
on each desk (such as a PC and a terminal into the corporate network). A vast array of computing
power develops that is completely out of the control of the Information Technology (IT) group.
The result can be (and often is) chaotic.
It rapidly became evident that a scheme was necessary to provide some level of interconnection.
Enter the local area network (LAN). The LAN became the medium to connect these PCs to
shared resources. However, the simple connection of resources and computers was not all that
was required. Sharing these resources effectively requires a server. As an example of server
function, consider again the problem of sharing a printer among a collection of end users. A
printer is inherently a serial device (it prints one thing at a time). A printer cannot print a few
characters submitted by user A, then a few from user B, and so on; it must print user A's
complete document before printing user B's job. Simply connecting a printer to the network will
not accomplish the serialization of printing, since users A and B are not synchronized with
respect to job submission. A simple solution to this problem is to attach the printer to the network
via a specialized processor, called a printer server. This processor accumulates inputs from users
A and B, assembles each collection of inputs into a print job, and then sends the jobs to the
printer in serial fashion.
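A minimal sketch of the print-server behavior just described, using an in-memory queue and a single worker thread to stand in for the real spooler:

```python
import queue
import threading

# Minimal print-server sketch (illustrative, not a real spooler): clients
# submit complete documents; a single worker thread drains the FIFO queue,
# so each job is "printed" in full before the next one starts.
class PrintServer:
    def __init__(self):
        self.jobs = queue.Queue()
        self.printed = []            # stands in for the physical printer
        worker = threading.Thread(target=self._run, daemon=True)
        worker.start()

    def submit(self, user, document):
        self.jobs.put((user, document))   # accumulate whole jobs

    def _run(self):
        while True:
            user, document = self.jobs.get()
            self.printed.append(f"{user}: {document}")
            self.jobs.task_done()

server = PrintServer()
server.submit("A", "quarterly report")   # user A's complete document
server.submit("B", "memo")               # user B's job waits its turn
server.jobs.join()                       # block until the queue is drained
print(server.printed)                    # jobs come out serialized, A then B
```

The queue is what decouples the unsynchronized submitters from the strictly serial printer; the single worker is the serialization point.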
The printer server can also perform such tasks as initializing the printer and downloading fonts.
The server must have substantial memory capability to assemble the various jobs, and it must
contain the logic required to build a number of print queues (to prioritize the stream of printer
jobs). A second example of a server's function involves a shared database connected to the
network. In most systems, different users have different privileges associated with database
access. Some might not be allowed access to certain files, others might be allowed to access
these files to read information but not write to the files, while still others might have full
read/write access. When multiple users can update files, a gate-keeping task must be performed,
so that when user A has accessed a given file, user B is denied access to the file until user A
is finished. Otherwise, user B could update a file at the central location while user A is still
working on it, causing file overwrites. Some authority must perform file locking to assure that
databases are correctly updated. In sophisticated systems, locking could be performed on a
record (rather than a file) basis: user B can access any record that user A has not downloaded,
but B cannot obtain access to a record currently being updated.
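The record-level locking just described can be sketched as follows. The RecordStore class and its API are hypothetical: one lock per record means user B may update any record user A is not holding, while a contended update fails fast instead of overwriting.

```python
import threading

# Record-level locking sketch (hypothetical class and API). Each record
# key gets its own lock; an update on a busy record is refused rather
# than allowed to overwrite another user's in-progress change.
class RecordStore:
    def __init__(self, records):
        self.records = dict(records)
        self._locks = {key: threading.Lock() for key in self.records}

    def update(self, key, value, timeout=0.1):
        lock = self._locks[key]
        if not lock.acquire(timeout=timeout):
            return False                 # record busy: caller retries later
        try:
            self.records[key] = value    # safe: we hold this record's lock
            return True
        finally:
            lock.release()

store = RecordStore({"cust-1": "old", "cust-2": "old"})
assert store.update("cust-1", "new")          # uncontended update succeeds

store._locks["cust-2"].acquire()              # simulate user A holding cust-2
assert store.update("cust-2", "x") is False   # user B is refused, no overwrite
store._locks["cust-2"].release()              # user A finishes
assert store.update("cust-2", "new")          # now user B's update goes through
```

A real database server would add read/write privilege checks and deadlock handling on top of this gate-keeping core.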
The job of the file (or database) server is to enforce security measures and guarantee
consistency of the shared database. The file server must have substantial resources to store all
the requisite databases and enough processing power to respond quickly to the many requests
submitted via the network. Many other server types are available.
For example, a communications server might manage requests for access to remote resources
(offsite resources that are not maintained locally). This server would allow all requests for
remote communication to be funneled through a common processor, and it would provide an
attachment point for a link to a WAN. Application servers might perform specialized
computational tasks (graphics is a good example), minimizing the requirement for sophisticated
hardware deployed at every network location. Servers are sometimes simply PCs, but they are
often specialized high-speed machines called workstations. In some environments, the servers
might even be minicomputers or mainframes. Those computers that do not provide a server
function are typically called clients, and most PC networks are client/server oriented.