Tuesday, October 30, 2007

Multimedia

Introduction
· A high technology of the present day.
· Multimedia can be used by many communities from very different backgrounds
- From different work disciplines and professions
- Some technical and some non-technical (CREATIVE)
- Entertainment and education groups explore new applications, while the computer, telecommunications and electronics industries create different technologies.

Examples :
- Computer users might use multimedia presentations, multimedia workstations, multimedia databases
- The multimedia developers involved are presentation authors, workstation designers, database researchers (that is, some developers are technical while others come from more creative specialisations)

Literal Translation
Media - Refers to forms of human interaction that are suited to capture and processing by computer, such as video, audio, text, graphics, animation

Multi - Indicates that several of these media exist within the same system or application

Discussion of the Definition of Multimedia
· There are many differing views on the definition of multimedia
· It can be said that the definition has to be given according to context

Examples : From the aspect of
- Applications
- Computer systems (hardware and software)
- Communications and networking

Definition of Multimedia

· In the context of this class, emphasis will be given to multimedia systems (hardware and software) and the technologies involved in handling each medium

· In the context of computer systems
- A multimedia computer system is one capable of integrating two or more forms of media within a single electronic document
- Media : audio, video, animation, graphics and text

Terminology
· Multimedia
· Information
· Application – A computer program that serves a specific purpose

E.g. : Word, PowerPoint, Excel
· Interactive – The application is under the user's control (it provides choices to the user)
· Developers – The people involved in developing a multimedia product
· User – Usually called the end user. The individual or group that uses the multimedia product
· Authoring tools – Software used to develop multimedia products
· Product – A fully developed multimedia product that can be used by the end user

Multimedia Technology
· The existence of multimedia systems and services depends on advances in
1. computer system architecture
2. operating systems
3. network communications
4. software technology
5. interfaces

Examples
- Cheap digital integrated-circuit processors and memory chips fast enough to handle all media in digital form
- Cheap disks with enough capacity to store suitable quantities of all the media
- Data transmission over communication networks not only through fibre optics but also over ordinary television cable, telephone wire and so on

Application areas of multimedia
· Use of multimedia by consumers
- Research and reference material
- Education
- Entertainment and games
- Telemarketing

· Use of multimedia by producers or distributors
- Kiosks
- Networked information delivery
- Sales and presentations
- Simulation and training
- Electronic publishing

· Use of multimedia by workers
- Multimedia databases
- Data visualisation
- Groupware
- Communications

Monday, October 29, 2007

http vs https

Difference between http:// and https:// Very important... must know!

The main difference between http:// and https:// is that it's all about keeping you secure. HTTP stands for HyperText Transfer Protocol, which is just a fancy way of saying it's a protocol (a language, in a manner of speaking) for passing information back and forth between web servers and clients. The important thing is the letter S, which makes the difference between HTTP and HTTPS.

The S (big surprise) stands for "Secure". If you visit a website or web page and look at the address in the web browser, it will likely begin with http://. This means that the website is talking to your browser using the regular, insecure language. In other words, it is possible for someone to "eavesdrop" on your computer's conversation with the website. If you fill out a form on the website, someone might see the information you send to that site. This is why you should never enter your credit card number on an http website!

If the web address begins with https://, your computer is talking to the website over an encrypted connection that eavesdroppers cannot read. You understand why this is so important, right? If a website ever asks you to enter your credit card information, you should automatically check whether the web address begins with https://.

If it doesn't, do not enter sensitive information like a credit card number!
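As a quick illustration, here is a small Python sketch of the check described above (the URL and the function name are invented for the example): inspect the scheme before deciding whether a page is safe for sensitive data.

```python
from urllib.parse import urlparse

def is_secure(url):
    """True only if the address begins with https:// (an encrypted connection)."""
    return urlparse(url).scheme == "https"

# Only an https:// address should ever receive a credit card number.
print(is_secure("https://shop.example.com/checkout"))  # True
print(is_secure("http://shop.example.com/checkout"))   # False
```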

Information System (IS)

Executive Information System (EIS)
An Executive Information System (EIS) is a type of management information system intended to facilitate and support the information and decision-making needs of senior executives by providing easy access to both internal and external information relevant to meeting the strategic goals of the organization. It is commonly considered a specialized form of a Decision Support System (DSS).

The emphasis of EIS is on graphical displays and easy-to-use user interfaces. They offer strong reporting and drill-down capabilities. In general, EIS are enterprise-wide DSS that help top-level executives analyze, compare, and highlight trends in important variables so that they can monitor performance and identify opportunities and problems. EIS and data warehousing technologies are converging in the marketplace.

Management Information Systems (MIS)
Management Information Systems (MIS) is a general name for the academic discipline covering the application of people, technologies, and procedures — collectively called information systems — to solve business problems. MIS are distinct from regular information systems in that they are used to analyze other information systems applied in operational activities in the organization.[1] Academically, the term is commonly used to refer to the group of information management methods tied to the automation or support of human decision making, e.g. Decision Support Systems, Expert Systems, and Executive Information Systems.[1]

Decision support systems (DSS)
Decision support systems are a class of computer-based information systems, including knowledge-based systems, that support decision-making activities.

Because there is no exact definition of DSS, there is obviously no agreement on the standard characteristics and capabilities of DSS. Turban, Aronson, and Liang [23] propose an ideal set of characteristics and capabilities of DSS. The key DSS characteristics and capabilities are as follows:
* Support for decision makers in semistructured and unstructured problems.
* Support managers at all levels.
* Support individuals and groups.
* Support for interdependent or sequential decisions.
* Support intelligence, design, choice, and implementation.
* Support for a variety of decision processes and styles.
* DSS should be adaptable and flexible.
* DSS should be interactive and provide ease of use.
* Effectiveness balanced with efficiency (benefit must exceed cost).
* Complete control by decision-makers.
* Ease of development by end users (modification to suit needs and a changing environment).
* Support modeling and analysis.
* Data access.
* Standalone, integration and Web-based.

OSI

The Open Systems Interconnection Basic Reference Model (OSI Reference Model or OSI Model for short) is a layered, abstract description for communications and computer network protocol design, developed as part of the Open Systems Interconnection (OSI) initiative. It is also called the OSI seven layer model.

The layers, described below, are, from top (Layer 7) to bottom (Layer 1):
7. Application
6. Presentation
5. Session
4. Transport
3. Network
2. Data Link, and
1. Physical.
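For reference, the layers and their standard numbers can be captured in a tiny Python table (a sketch, nothing more): Layer 7 sits at the top of the stack and Layer 1 at the bottom.

```python
# OSI layers keyed by their standard layer numbers (7 at the top, 1 at the bottom).
OSI_LAYERS = {
    7: "Application",
    6: "Presentation",
    5: "Session",
    4: "Transport",
    3: "Network",
    2: "Data Link",
    1: "Physical",
}

# Transmission media (cables, radio, fibre) operate at the very bottom:
print(OSI_LAYERS[1])  # Physical
```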

Transmission media

Transmission media are the physical pathways that connect computers, other devices, and people on a network—the highways and byways that comprise the information superhighway. Each transmission medium requires specialized network hardware that has to be compatible with that medium. You have probably heard terms such as Layer 1, Layer 2, and so on. These refer to the OSI reference model, which defines network hardware and services in terms of the functions they perform. (The OSI reference model is discussed in detail in Chapter 5, "Data Communications Basics.") Transmission media operate at Layer 1 of the OSI model: They encompass the physical entity and describe the types of highways on which voice and data can travel.

It would be convenient to construct a network of only one medium. But that is impractical for anything but an extremely small network. In general, networks use combinations of media types. There are three main categories of media types:

Copper cable — Types of cable include unshielded twisted-pair (UTP), shielded twisted-pair (STP), and coaxial cable. Copper-based cables are inexpensive and easy to work with compared to fiber-optic cables, but as you'll learn when we get into the specifics, a major disadvantage of cable is that it offers a rather limited spectrum that cannot handle the advanced applications of the future, such as teleimmersion and virtual reality.

Wireless — Wireless media include radio frequencies, microwave, satellite, and infrared. Deployment of wireless media is faster and less costly than deployment of cable, particularly where there is little or no existing infrastructure (e.g., Africa, Asia-Pacific, Latin America, eastern and central Europe). Wireless is also useful where environmental circumstances make it impossible or cost-prohibitive to use cable (e.g., in the Amazon, in the Empty Quarter in Saudi Arabia, on oil rigs).

There are a few disadvantages associated with wireless, however. Historically, wireless solutions support much lower data rates than do wired solutions, although with new developments in wireless broadband, that is becoming less of an issue (see Part IV, "Wireless Communications"). Wireless is also greatly affected by external impairments, such as the impact of adverse weather, so reliability can be difficult to guarantee. However, new developments in laser-based communications—such as virtual fiber—can improve this situation. (Virtual fiber is discussed in Chapter 15, "WMANs, WLANs, and WPANs.") Of course, one of the biggest concerns with wireless is security: Data must be secured in order to ensure privacy.

Fiber optics — Fiber offers enormous bandwidth, immunity to many types of interference and noise, and improved security. Therefore, fiber provides very clear communications and a relatively noise-free environment. The downside of fiber is that it is costly to purchase and deploy because it requires specialized equipment and techniques.

Data transmission mode

Most internal PC data channels support simultaneous bi-directional flow of signals, but communication channels between the PC and the outside world are not so robust.

Here you will be learning about the different alternatives that are available for the design of communication channel. These alternatives apply to both Analog and Digital channels and they are:
* Simplex
* Half duplex
* Full duplex

Simplex

The simplest signal flow technique is the simplex configuration. Simplex allows transmission in only one direction and is a unidirectional channel. Note the difference between simplex and half-duplex.

Half-duplex refers to two-way communications where only one party can transmit at a time.

Simplex refers to one-way communications where one party is the transmitter and the other is the receiver. An example of simplex communications is a simple radio or television, with which you can receive data from stations but cannot transmit data.

Advantages of Simplex
* Cheapest Communication method

Disadvantages of Simplex
* Only allows for communication in one direction

Half Duplex
Half Duplex refers to the transmission of data in just one direction at a time. For example, a walkie-talkie is a half-duplex device because only one party can talk at a time.

In contrast, a telephone is a full-duplex device because both parties can talk simultaneously. Most modems contain a switch that lets you select between half-duplex and full-duplex modes. The correct choice depends on which program you are using to transmit data through the modem.

In half-duplex mode, each character transmitted is immediately displayed on your screen. (For this reason, it is sometimes called local echo -- characters are echoed by the local device).

In full-duplex mode, transmitted data is not displayed on your monitor until it has been received and returned (remotely echoed) by the other device. If you are running a communications program and every character appears twice, it probably means that your modem is in half-duplex mode when it should be in full-duplex mode, and every character is being both locally and remotely echoed.

Advantages of Half Duplex
* Costs less than full duplex
* Enables two-way communication.

Disadvantages of Half Duplex
* Only one device can transmit at a time
* Costs more than simplex

Full Duplex

Full Duplex refers to the transmission of data in two directions simultaneously.

For example, a telephone is a full-duplex device because both parties can talk at once. In contrast, a walkie-talkie is a half-duplex device because only one party can transmit at a time.

Most modems have a switch that lets you choose between full-duplex and half-duplex modes. The choice depends on which communications program you are running. In full-duplex mode, data you transmit does not appear on your screen until it has been received and sent back by the other party. This enables you to validate that the data has been accurately transmitted. If your display screen shows two of each character, it probably means that your modem is set to half-duplex mode when it should be in full-duplex mode.

Advantages of Full Duplex
Enables two-way communication in both directions simultaneously.

Disadvantages of Full Duplex
The most expensive method in terms of equipment because two bandwidth channels are needed.
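The three modes can be contrasted in a toy Python sketch (the direction labels "A->B" and "B->A" are made up for illustration): each mode is defined by which directions may be active at the same instant.

```python
def allowed(mode, directions):
    """Can this set of directions be active at the same instant under this mode?"""
    if mode == "simplex":              # one fixed direction only (A is the transmitter)
        return directions <= {"A->B"}
    if mode == "half-duplex":          # either direction, but only one party at a time
        return len(directions) <= 1
    if mode == "full-duplex":          # both directions simultaneously
        return directions <= {"A->B", "B->A"}
    raise ValueError(f"unknown mode: {mode}")

print(allowed("simplex", {"B->A"}))              # False: the receiver cannot transmit
print(allowed("half-duplex", {"A->B", "B->A"}))  # False: one party at a time
print(allowed("full-duplex", {"A->B", "B->A"}))  # True: simultaneous two-way
```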

Virtual Memory

Virtual memory is a common part of most operating systems on desktop computers. It has become so common because it provides a big benefit for users at a very low cost.

Most computers today have something like 64 or 128 megabytes of RAM (random-access memory) available for use by the CPU (central processing unit). Often, that amount of RAM is not enough to run all of the programs that most users expect to run at once. For example, if you load the Windows operating system, an e-mail program, a Web browser and a word processor into RAM simultaneously, 64 megabytes is not enough to hold it all. If there were no such thing as virtual memory, your computer would have to say, "Sorry, you cannot load any more applications. Please close an application to load a new one." With virtual memory, the computer can look for areas of RAM that have not been used recently and copy them onto the hard disk. This frees up space in RAM to load the new application. Because it does this automatically, you don't even know it is happening, and it makes your computer feel like it has unlimited RAM space even though it has only 64 megabytes installed. Because hard-disk space is so much cheaper than RAM chips, virtual memory also provides a nice economic benefit.

The area of the hard disk that stores the RAM image is called a page file. It holds pages of RAM on the hard disk, and the operating system moves data back and forth between the page file and RAM. (On older Windows machines, page files have a .SWP extension; NT-based versions of Windows use a file called pagefile.sys.)

Of course, the read/write speed of a hard drive is much slower than RAM, and the technology of a hard drive is not geared toward accessing small pieces of data at a time. If your system has to rely too heavily on virtual memory, you will notice a significant performance drop. The key is to have enough RAM to handle everything you tend to work on simultaneously. Then, the only time you "feel" the slowness of virtual memory is in the slight pause that occurs when you change tasks. When you have enough RAM for your needs, virtual memory works beautifully. When you don't, the operating system has to constantly swap information back and forth between RAM and the hard disk. This is called thrashing, and it can make your computer feel incredibly slow.
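The "copy the least recently used areas of RAM to disk" idea can be sketched in a few lines of Python (the frame count and page names here are invented for illustration):

```python
from collections import OrderedDict

def access(ram, page_file, frames, page):
    """Touch a page; if RAM is full, the least recently used page goes to the page file."""
    if page in ram:
        ram.move_to_end(page)                 # mark as recently used
        return "hit"
    page_file.discard(page)                   # bring it back in if it was swapped out
    if len(ram) >= frames:                    # RAM full: evict to "disk"
        victim, _ = ram.popitem(last=False)   # oldest (least recently used) page
        page_file.add(victim)
    ram[page] = True
    return "fault"

ram, page_file = OrderedDict(), set()
for p in ["browser", "mail", "word", "browser", "excel"]:
    access(ram, page_file, frames=3, page=p)
print(list(ram))   # ['word', 'browser', 'excel'] -- still in RAM
print(page_file)   # {'mail'} -- the least recently used page was paged out
```

If too many pages compete for too few frames, every access becomes a fault followed by an eviction, which is exactly the thrashing described above.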

Operating System (OS)


The most important program that runs on a computer. Every general-purpose computer must have an operating system to run other programs. Operating systems perform basic tasks, such as recognizing input from the keyboard, sending output to the display screen, keeping track of files and directories on the disk, and controlling peripheral devices such as disk drives and printers.

For large systems, the operating system has even greater responsibilities and powers. It is like a traffic cop -- it makes sure that different programs and users running at the same time do not interfere with each other. The operating system is also responsible for security, ensuring that unauthorized users do not access the system.

Operating systems can be classified as follows:
multi-user : Allows two or more users to run programs at the same time. Some operating systems permit hundreds or even thousands of concurrent users.
multiprocessing : Supports running a program on more than one CPU.
multitasking : Allows more than one program to run concurrently.
multithreading : Allows different parts of a single program to run concurrently.
real time: Responds to input within guaranteed time constraints. General-purpose operating systems, such as DOS and UNIX, are not real-time.

Operating systems provide a software platform on top of which other programs, called application programs, can run. The application programs must be written to run on top of a particular operating system. Your choice of operating system, therefore, determines to a great extent the applications you can run. For PCs, the most popular operating systems are DOS, OS/2, and Windows, but others are available, such as Linux.

As a user, you normally interact with the operating system through a set of commands. For example, the DOS operating system contains commands such as COPY and RENAME for copying files and changing the names of files, respectively. The commands are accepted and executed by a part of the operating system called the command processor or command line interpreter. Graphical user interfaces allow you to enter commands by pointing and clicking at objects that appear on the screen.
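A command processor is essentially a loop of parse, look up, dispatch. A toy Python sketch (the file table and the two commands are invented; nothing here is real DOS):

```python
# A toy command processor: parse the line, look up the verb, run the routine.
files = {"REPORT.TXT": "quarterly figures"}      # pretend file system

def do_copy(src, dst):
    files[dst] = files[src]

def do_rename(old, new):
    files[new] = files.pop(old)

COMMANDS = {"COPY": do_copy, "RENAME": do_rename}

def execute(line):
    verb, *args = line.split()
    COMMANDS[verb](*args)    # an unknown verb raises KeyError ("Bad command")

execute("COPY REPORT.TXT BACKUP.TXT")
execute("RENAME REPORT.TXT OLD.TXT")
print(sorted(files))  # ['BACKUP.TXT', 'OLD.TXT']
```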

Back up solution

Both differential and incremental backups are "smart" backups that save time and disk space by only backing up changed files. But they differ significantly in how they do it — and how useful the result is.

A full backup created from within Windows, of course, backs up all the files in a partition or on a disk by copying all disk sectors with data to the backup image file. When creating a full backup of an unknown or damaged filesystem, Acronis True Image copies all sectors to the image file, whether or not a sector contains data. This is the simplest form of backup, but it is also the most time-consuming and space-intensive, and the least flexible.

Typically full backups are only done once a week and are part of an overall backup plan. Sometimes a full backup is done after a major change of the data on the disk, such as an operating system upgrade or software install. The relatively long intervals between backups mean that if something goes wrong, a lot of data is going to be lost. That's why it is wise to back up data between full backups.

Most of the information on a computer changes very slowly or not at all. This includes the applications themselves, the operating system and even most of the user data. Typically, only a small percentage of the information in a partition or disk changes on a daily, or even a weekly, basis. For that reason, it makes sense only to back up the data that has changed on a daily basis. This is the basis of sophisticated backup strategies.

Differential backups were the next step in the evolution of backup strategies. A differential backup backs up only the files that changed since the last full backup. For example, suppose you do a full backup on Sunday. On Monday you back up only the files that changed since Sunday, on Tuesday you again back up only the files that changed since Sunday, and so on until the next full backup. Differential backups are quicker than full backups because so much less data is being backed up. But the amount of data being backed up grows with each differential backup until the next full backup. Differential backups are more flexible than full backups, but still unwieldy to do more than about once a day, especially as the next full backup approaches.
Incremental backups also back up only the changed data, but they only back up the data that has changed since the last backup — be it a full or incremental backup. They are sometimes called "differential incremental backups," while differential backups are sometimes called "cumulative incremental backups." Confused yet? Don't be.

If you do an incremental backup on Tuesday, you only back up the data that changed since the incremental backup on Monday. The result is a much smaller, faster backup. The characteristic of incremental backups is that the shorter the time interval between backups, the less data there is to back up. In fact, with sophisticated backup software like Acronis True Image, the backups are so small and so fast you can actually back up every hour, or even more frequently, depending on the work you're doing and how important it is to have current backups.
While incremental backups give much greater flexibility and granularity (time between backups), they have the reputation for taking longer to restore because the backup has to be reconstituted from the last full backup and all the incremental backups since. Acronis True Image uses special snapshot technology to rebuild the full image quickly for restoration. This makes incremental backups much more practical for the average enterprise.
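The selection rule behind the two strategies is easy to state in code. A Python sketch (the file names and timestamps are invented; real tools compare archive bits or modification times):

```python
# Hours since Sunday's full backup; each file's last-modified time is invented.
mtimes = {"os.dll": 0, "report.doc": 26, "notes.txt": 50}
last_full = 0       # the full backup on Sunday
last_backup = 26    # Monday night's backup, full or incremental

# Differential: everything changed since the last FULL backup.
differential = {f for f, t in mtimes.items() if t > last_full}
# Incremental: only what changed since the last backup of ANY kind.
incremental = {f for f, t in mtimes.items() if t > last_backup}

print(sorted(differential))  # ['notes.txt', 'report.doc'] -- grows until the next full
print(sorted(incremental))   # ['notes.txt'] -- small and fast
```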

RAID

Short for Redundant Array of Independent (or Inexpensive) Disks, a category of disk drives that employ two or more drives in combination for fault tolerance and performance. RAID disk drives are used frequently on servers but aren't generally necessary for personal computers.

There are a number of different RAID levels:
Level 0 -- Striped Disk Array without Fault Tolerance: Provides data striping (spreading out blocks of each file across multiple disk drives) but no redundancy. This improves performance but does not deliver fault tolerance. If one drive fails, all data in the array is lost.

Level 1 -- Mirroring and Duplexing: Provides disk mirroring. Level 1 provides twice the read transaction rate of single disks and the same write transaction rate as single disks.

Level 2 -- Error-Correcting Coding: Not a typical implementation and rarely used, Level 2 stripes data at the bit level rather than the block level.

Level 3 -- Bit-Interleaved Parity: Provides byte-level striping with a dedicated parity disk. Level 3, which cannot service simultaneous multiple requests, also is rarely used.

Level 4 -- Dedicated Parity Drive: A commonly used implementation of RAID, Level 4 provides block-level striping (like Level 0) with a parity disk. If a data disk fails, the parity data is used to create a replacement disk. A disadvantage to Level 4 is that the parity disk can create write bottlenecks.

Level 5 -- Block Interleaved Distributed Parity: Provides block-level data striping with parity information distributed across all disks. This results in excellent performance and good fault tolerance. Level 5 is one of the most popular implementations of RAID.

Level 6 -- Independent Data Disks with Double Parity: Provides block-level striping with parity data distributed across all disks.

Level 0+1 – A Mirror of Stripes: Not one of the original RAID levels, two RAID 0 stripes are created, and a RAID 1 mirror is created over them. Used for both replicating and sharing data among disks.

Level 10 – A Stripe of Mirrors: Not one of the original RAID levels, multiple RAID 1 mirrors are created, and a RAID 0 stripe is created over these.

Level 7: A trademark of Storage Computer Corporation that adds caching to Levels 3 or 4.
RAID S: EMC Corporation's proprietary striped parity RAID system used in its Symmetrix storage systems.
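The parity used by Levels 3 through 6 is plain XOR, which is why a single failed disk can be rebuilt from the survivors. A small Python sketch (the block contents are arbitrary bytes chosen for illustration):

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length blocks byte by byte; this is the RAID parity calculation."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

d0, d1, d2 = b"\x0f\x0f", b"\xf0\x01", b"\x33\x55"
parity = xor_blocks([d0, d1, d2])          # written to the parity disk (or striped)

# The disk holding d1 fails: XOR the surviving data with the parity to rebuild it.
rebuilt = xor_blocks([d0, d2, parity])
print(rebuilt == d1)  # True
```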

SDLC Model

System Development Life Cycle (SDLC) models:
1. Waterfall
2. Fountain
3. Spiral
4. Build and fix
5. Rapid prototyping
6. Incremental, and
7. Synchronize and stabilize.

WATERFALL
The oldest of these, and the best known, is the waterfall: a sequence of stages in which the output of each stage becomes the input for the next. These stages can be characterized and divided up in different ways, including the following:
Project planning, feasibility study: Establishes a high-level view of the intended project and determines its goals.


Systems analysis, requirements definition: Refines project goals into defined functions and operation of the intended application. Analyzes end-user information needs.

Systems design: Describes desired features and operations in detail, including screen layouts, business rules, process diagrams, pseudocode and other documentation.

Implementation: The real code is written here.

Integration and testing: Brings all the pieces together into a special testing environment, then checks for errors, bugs and interoperability.

Acceptance, installation, deployment: The final stage of initial development, where the software is put into production and runs actual business.

Maintenance: What happens during the rest of the software's life: changes, correction, additions, moves to a different computing platform and more. This, the least glamorous and perhaps most important step of all, goes on seemingly forever.

The waterfall model is well understood, but it's not as useful as it once was. In a 1991 Information Center Quarterly article, Larry Runge says that SDLC "works very well when we are automating the activities of clerks and accountants. It doesn't work nearly as well, if at all, when building systems for knowledge workers -- people at help desks, experts trying to solve problems, or executives trying to lead their company into the Fortune 100."

Another problem is that the waterfall model assumes that the only role for users is in specifying requirements, and that all requirements can be specified in advance. Unfortunately, requirements grow and change throughout the process and beyond, calling for considerable feedback and iterative consultation. Thus many other SDLC models have been developed.

FOUNTAIN

The fountain model recognizes that although some activities can't start before others -- for example, you need a design before you can start coding -- there is considerable overlap of activities throughout the development cycle.

SPIRAL

The spiral model emphasizes the need to go back and reiterate earlier stages a number of times as the project progresses. It's actually a series of short waterfall cycles, each producing an early prototype representing a part of the entire project. This approach helps demonstrate a proof of concept early in the cycle, and it more accurately reflects the disorderly, even chaotic evolution of technology.

BUILD & FIX

Build and fix is the crudest of the methods. Write some code, then keep modifying it until the customer is happy. Without planning, this is very open-ended and can be risky.

RAPID PROTOTYPE

In the rapid prototyping (sometimes called rapid application development) model, initial emphasis is on creating a prototype that looks and acts like the desired product in order to test its usefulness. The prototype is an essential part of the requirements determination phase, and may be created using tools different from those used for the final product. Once the prototype is approved, it is discarded and the "real" software is written.

INCREMENTAL

The incremental model divides the product into builds, where sections of the project are created and tested separately. This approach will likely find errors in user requirements quickly, since user feedback is solicited for each stage and because code is tested sooner after it's written.

SYNCHRONIZE & STABILIZE

The synchronize and stabilize method combines the advantages of the spiral model with technology for overseeing and managing source code. This method allows many teams to work efficiently in parallel. This approach was defined by David Yoffie of Harvard University and Michael Cusumano of MIT. They studied how Microsoft Corp. developed Internet Explorer and Netscape Communications Corp. developed Communicator, finding common threads in the ways the two companies worked. For example, both companies did a nightly compilation (called a build) of the entire project, bringing together all the current components. They established release dates and expended considerable effort to stabilize the code before it was released. The companies did an alpha release for internal testing; one or more beta releases (usually feature-complete) for wider testing outside the company, and finally a release candidate leading to a gold master, which was released to manufacturing. At some point before each release, specifications would be frozen and the remaining time spent on fixing bugs.

Both Microsoft and Netscape managed millions of lines of code as specifications changed and evolved over time. Design reviews and strategy sessions were frequent, and everything was documented. Both companies built contingency time into their schedules, and when release deadlines got close, both chose to scale back product features rather than let milestone dates slip.

Database Design Approach

Importance of the Data Model

An effective database design ensures that the key aspects of a successful project are executed within expected timelines, which leads to a cost-effective development phase. Almost every application designed and developed will need some sort of database or data-storage functionality.


Therefore, it is imperative that a data model is constructed during the design phase. The data model must then be strictly followed, and updated whenever the design changes. It is one of the crucial reference instruments needed not only by the project team (both application programmers and database administrators) but also by the client or customer, since close observation of these representations raises many questions that would otherwise not have been foreseen.

Database redundancy is a No-No

The creation of entities in your data model leads to a representation of what an actual real-life scenario would and should look like. Such an approach minimises data redundancy, restructuring and input/output transaction sizes while preserving referential integrity constraints - database normalization, in short. Data should never be replicated and stored in another location where a change to one instance of the data forces the DBA or developer to implement the same change on the same data elsewhere. Data redundancy is a definite no-no. Enforcing normalization principles is one of the key practices of effective database design that addresses the problem of data redundancy, and it helps organize data within the database efficiently. There are rules to be followed within normalization, and these rules are defined in terms of "normal forms". It is another matter that, in a real-world scenario, the needed normal form is often not achieved, and this is where a design may start to flounder.

The First Normal Form

The First Normal Form stipulates that no repeating groups or sets should be stored within a table; these repeating groups should be moved to a separate table. To identify each row of these tables uniquely, a primary key should be established.
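As a small sketch of 1NF (table and column names here are invented for illustration), a STUDENT table with repeating phone1/phone2/phone3 columns would violate the rule; the repeating group moves into its own table keyed back to the student:

```python
import sqlite3

# Hypothetical schema: the repeating phone columns become rows in a
# separate STUDENT_PHONE table, linked back by the student's key.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.execute("CREATE TABLE student (student_id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("""CREATE TABLE student_phone (
    student_id INTEGER REFERENCES student(student_id),
    phone TEXT)""")

cur.execute("INSERT INTO student VALUES (1, 'Alice')")
# Any number of phones per student, with no empty repeating columns:
cur.executemany("INSERT INTO student_phone VALUES (1, ?)",
                [("555-0101",), ("555-0102",)])

phones = [row[0] for row in cur.execute(
    "SELECT phone FROM student_phone WHERE student_id = 1")]
print(phones)  # ['555-0101', '555-0102']
```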

The Second Normal Form

The table must already be in First Normal Form, and every non-key column must depend on the whole primary key, not on just part of a composite key. Columns that depend on only part of the key are moved to their own table, with foreign key constraints enforced to link them back.

The Third Normal Form

Every column in the table should be related to and dependent on the primary key of the table, and not on any other non-key column. If a column depends on something other than the key, it should be stored in a new table with a new key.
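A minimal sketch of the 3NF fix (table names are hypothetical): storing dept_name directly in EMPLOYEE makes it depend on dept_id rather than on emp_id, so the name moves into its own DEPARTMENT table and is stored once:

```python
import sqlite3

# Hypothetical example: dept_name lives in DEPARTMENT, not EMPLOYEE,
# removing the transitive dependency on the non-key column dept_id.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE department (dept_id INTEGER PRIMARY KEY, dept_name TEXT)")
cur.execute("""CREATE TABLE employee (
    emp_id INTEGER PRIMARY KEY,
    name TEXT,
    dept_id INTEGER REFERENCES department(dept_id))""")
cur.execute("INSERT INTO department VALUES (10, 'Accounts')")
cur.executemany("INSERT INTO employee VALUES (?, ?, 10)",
                [(1, 'Bob'), (2, 'Carol')])

# Renaming the department touches one row, not every employee row:
cur.execute("UPDATE department SET dept_name = 'Finance' WHERE dept_id = 10")
row = cur.execute("""SELECT e.name, d.dept_name
                     FROM employee e JOIN department d USING (dept_id)
                     WHERE e.emp_id = 1""").fetchone()
print(row)  # ('Bob', 'Finance')
```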

The Fourth Normal Form

This one probably exists somewhere in dream land - the elimination of independent multi-valued relationships in a Relational Database.

The Fifth Normal Form

Exists in never-never land - it calls for a perfect segregation of logically related many-to-many relationships.

So now you know something about relationships (that's what our whole Relational Database thing is about, right?). Just keep in mind that as we increase and tighten our relationship enforcement, there is a small trade-off in performance.

The design approach

I believe that a good design should implement, at the least, the 2nd Normal Form. I also feel we should make every effort to implement the 3rd Normal Form for large applications that have scope for enhancement. Scalability should always be on our minds while designing the various components of our application. We can keep dreaming of a 4th and 5th Normal Form, but I'd like to hear from Architects and DBAs about some of the rules they follow when implementing Normalization.

However, reaching third normal form is sometimes an arduous task in itself. It may require you to create a host of smaller entity tables to reduce the prospect of data being replicated across tables. Although this approach enforces integrity, it has the potential to reduce performance when running SQL queries. I had a conversation with a fellow developer who mentioned that his team had knowingly broken one of the 3rd Normal Form rules, since they knew the consequence would not create any dependency issues for their application: they stored comma-separated primary key values in a single field of another table. I am still unclear whether such an approach is good. I'd like to hear views from DBAs and Architects on this one.

Data Objects & Properties: Counterparts for Database Tables and Table Columns

While designing the class diagram of our application, we design classes to be used as Data Objects. These Data Objects contain Properties defined by getter and setter methods; the properties are descriptive and quantitative representations of an element of the entity. Data Objects and their Properties are class representations of our real-life entities (Employee, Office, Department, etc.). On the database side, these entities translate into Tables, while the Data Object properties are counterparts of the columns defined in your database tables.

We need Primary Keys:

While designing your table, a primary key is imperative. We make this possible by declaring a particular column (or columns) as PRIMARY KEY. The primary key is the unique identifier for an occurrence of our entity (a row of the table), and it must always contain a value. Often it is enough to create an identity column that auto-increments an integer value starting at 1. But I've come across situations where it was suggested and implemented that we create a primary key naming convention, so the key is not just another primary key but also carries some extra information that can be displayed in the presentation layer itself (a ticket no., for instance). I would appreciate it if anyone out there could shed some light on whether this is a good practice or not. There is a school of thought that says primary keys should be non-intelligent, carrying no business meaning at all.

PRIMARY keys are great for identifying a particular row in your table. They are also a great way to speed up retrieval of information from your table.
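A quick sketch of the non-intelligent surrogate key approach (the TKT- formatting convention below is a hypothetical example, not stored meaning): in SQLite, an INTEGER PRIMARY KEY column auto-assigns 1, 2, 3... when no value is supplied, and any display form is derived in the presentation layer:

```python
import sqlite3

# The key itself stays a meaningless auto-incrementing integer;
# the "ticket number" shown to users is formatted on the way out.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE ticket (ticket_id INTEGER PRIMARY KEY, subject TEXT)")
cur.execute("INSERT INTO ticket (subject) VALUES ('Printer jam')")
cur.execute("INSERT INTO ticket (subject) VALUES ('VPN down')")

labels = [f"TKT-{tid:05d}" for tid, _ in
          cur.execute("SELECT ticket_id, subject FROM ticket")]
print(labels)  # ['TKT-00001', 'TKT-00002']
```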

We may need Unique Keys:

Declaring a column as UNIQUE is another way to identify unique rows in your table; it is also called an alternate key. It is similar to the PRIMARY key concept except that 1. you can have more than one unique key per table, and 2. a unique key column can contain null values.
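A small sketch of both points (column names are hypothetical): a table can carry several UNIQUE keys alongside its PRIMARY key, SQLite permits a NULL in a UNIQUE column, and a duplicate value is rejected:

```python
import sqlite3

# Two alternate keys (badge_no, email) beside the primary key.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""CREATE TABLE employee (
    emp_id INTEGER PRIMARY KEY,
    badge_no TEXT UNIQUE,
    email TEXT UNIQUE)""")
cur.execute("INSERT INTO employee VALUES (1, 'B-100', 'a@x.com')")
cur.execute("INSERT INTO employee VALUES (2, NULL, 'b@x.com')")  # NULL allowed

try:
    cur.execute("INSERT INTO employee VALUES (3, 'B-100', 'c@x.com')")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True  # duplicate badge_no violates the UNIQUE key
print(rejected)  # True
```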

We need Relationships and Enforcers

Referential Integrity among entities (or tables) is enforced by the application of relationships: logical links between entities, implemented in the Database via Foreign keys. The business rule ensures that any value in the child table must be present as a primary key value in the parent table. A Foreign key is a column (or columns) whose values are drawn from the primary key of the parent table.

Some of the business rules that a Foreign Key Constraint enforces:
1. It must reference a PRIMARY KEY or UNIQUE column(s) in the primary key table.

2. The datatypes of the FOREIGN key column(s) and the referenced column(s) must be exactly the same.

3. INSERT and UPDATE operations on the child table are not allowed when the corresponding primary key value(s) do not exist in the parent table.

4. A DELETE operation on the parent table is not permitted while child rows still reference it, unless a cascading referential action is specified.

5. It will reference the PRIMARY KEY of the primary key table if no column or group of columns is specified in the constraint.

6. A foreign key is not restricted even when other constraints reference the same table.
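Rule 3 above can be sketched quickly (table names are hypothetical; note that SQLite enforces foreign keys only when the pragma is switched on):

```python
import sqlite3

# With enforcement on, a child row cannot reference a missing parent key.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # off by default in SQLite
cur = conn.cursor()
cur.execute("CREATE TABLE department (dept_id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("""CREATE TABLE employee (
    emp_id INTEGER PRIMARY KEY,
    dept_id INTEGER REFERENCES department(dept_id))""")
cur.execute("INSERT INTO department VALUES (10, 'Accounts')")
cur.execute("INSERT INTO employee VALUES (1, 10)")  # parent exists: allowed

try:
    cur.execute("INSERT INTO employee VALUES (2, 99)")  # no such department
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
print(rejected)  # True
```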

Different types of relationships exist - the one-to-one relationship, the one-to-many relationship and the many-to-many relationship.

One-To-One and One-To-Many relationships

A one-to-many relationship implies that one row of a parent table may be referenced by many rows of other entities via your table's PRIMARY KEY; in a one-to-one relationship, each parent row is referenced by at most one child row. Hence, on the database side, you may have a relationship between a DEPARTMENT table (parent) and an EMPLOYEE table (child), along with a relationship between the DEPARTMENT table (parent) and another child table. This means that one DEPARTMENT entity instance could contain many EMPLOYEE entity instances. I hope this clarifies both relationships.

Many-To-Many relationships

In a many-to-many relationship, many entity instances are related to many other entity instances. The only way to resolve this situation while enforcing the normalization principle of minimizing redundancy is to create an intermediary table that contains the primary keys of the two tables. Take a situation in your organization where many Employees can have multiple Roles (Project Manager and Configuration Manager?). Hence, we would have multiple EMPLOYEE table instances related to multiple Role instances. Creating multiple direct links between the two tables would lead to replication of data, which is bad, and would create intertwining relationships that can definitely wreck any good application you have in mind.

Fix: The solution is to create a third table which acts as a cross-reference table. This cross-reference table (commonly known as X-REF) contains the primary key columns from the earlier two tables, so the X-REF table is actually a child of the two parent tables. We map the many-to-many relationship through a third relationship in this third table: we break the task into two individual one-to-many relationships, with EMPLOYEE and ROLES as parents and a child table called EMPLOYEE_ROLES that contains two columns which are foreign keys to the parent tables' primary keys.
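The X-REF fix above can be sketched like this (table and column names are illustrative):

```python
import sqlite3

# EMPLOYEE and ROLE are the parents; EMPLOYEE_ROLE is the child X-REF
# table holding one foreign key to each, turning many-to-many into two
# one-to-many relationships.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
cur = conn.cursor()
cur.execute("CREATE TABLE employee (emp_id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE role (role_id INTEGER PRIMARY KEY, title TEXT)")
cur.execute("""CREATE TABLE employee_role (
    emp_id INTEGER REFERENCES employee(emp_id),
    role_id INTEGER REFERENCES role(role_id),
    PRIMARY KEY (emp_id, role_id))""")
cur.execute("INSERT INTO employee VALUES (1, 'Alice')")
cur.executemany("INSERT INTO role VALUES (?, ?)",
                [(1, 'Project Manager'), (2, 'Configuration Manager')])
cur.executemany("INSERT INTO employee_role VALUES (1, ?)", [(1,), (2,)])

titles = [r[0] for r in cur.execute(
    """SELECT title FROM role
       JOIN employee_role USING (role_id)
       WHERE emp_id = 1 ORDER BY role_id""")]
print(titles)  # ['Project Manager', 'Configuration Manager']
```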

[Source]

Data Modelling

If the question goes like this:

There are 3 phases involved in database modelling - conceptual, logical and physical. Give 3 main features of each phase.

I think the relevant answer is this:

ANSWER

There are three levels of data modeling. They are conceptual, logical, and physical. This section will explain the difference among the three, the order with which each one is created, and how to go from one level to the other.

Conceptual Data Model

Features of conceptual data model include:
* Includes the important entities and the relationships among them.
* No attribute is specified.
* No primary key is specified.

At this level, the data modeler attempts to identify the highest-level relationships among the different entities.

Logical Data Model

Features of logical data model include:
* Includes all entities and relationships among them.
* All attributes for each entity are specified.
* The primary key for each entity is specified.
* Foreign keys (keys identifying the relationship between different entities) are specified.
* Normalization occurs at this level.

At this level, the data modeler attempts to describe the data in as much detail as possible, without regard to how they will be physically implemented in the database.

In data warehousing, it is common for the conceptual data model and the logical data model to be combined into a single step (deliverable).

The steps for designing the logical data model are as follows:
1. Identify all entities.
2. Specify primary keys for all entities.
3. Find the relationships between different entities.
4. Find all attributes for each entity.
5. Resolve many-to-many relationships.
6. Normalization.

Physical Data Model

Features of physical data model include:
* Specification of all tables and columns.
* Foreign keys are used to identify relationships between tables.
* Denormalization may occur based on user requirements.
* Physical considerations may cause the physical data model to be quite different from the logical data model.

At this level, the data modeler will specify how the logical data model will be realized in the database schema.

The steps for physical data model design are as follows:
1. Convert entities into tables.
2. Convert relationships into foreign keys.
3. Convert attributes into columns.
4. Modify the physical data model based on physical constraints / requirements.

[Source]

Data Modelling 1

Quickstudy by Russell Kay

APRIL 14, 2003
(COMPUTERWORLD) - In the real world, we think of data as facts, figures and other bits of knowledge. Put a lot of data items together in a useful form, and you get information — maybe even intelligence.

Often, people can intuitively understand a given piece of data in isolation, but a computer can never do so without help. In computers, we store data in a database — a highly structured, carefully defined and rigidly formatted collection of records — so that we can retrieve it, use it, analyze it and work with it to run our businesses.

In fact, it is only the organization of a database that gives meaning and utility to the data inside it. Without that organization, all we have are undifferentiated ones and zeroes—not numbers, not letters and certainly not knowledge.

Thus a critical step in data processing is the creation of a plan for the database that's simple enough for the end user to understand, yet detailed enough to let the database designer create the actual structure using database software. We call this conceptual plan a data model, though we use this term to describe two related but different ideas.

One, which we can also call a database model, is somewhat abstract in nature and refers to a database's overall structure, or type. The best-known example is the relational model used by Oracle, DB2 and SQL Server. Others include flat-file, hierarchical, network, object, semantic and dimensional models.

The second type of data model, or schema, takes the overall structure of one of the standard database models and tailors it to a specific application, company, project or task. This type of data model gets down to specific data items, including their names, values, granularity and how they relate to one another.

We can compare these data models to the plans for a new building. An architect designs different types of buildings—a sports arena vs. a four-bedroom house, for example—using quite different materials and techniques (steel girders vs. wood framing, cranes vs. ladders). So, too, do we implement the various types of database models (say, relational vs. object-oriented) quite differently on a computer.

When we build a schema, however, we're working at a detailed, nitty-gritty level; it's more like consulting an interior designer than an architect. The architect plans for a kitchen's space, wiring and plumbing, but the designer helps decide which appliances to buy, how to group lights, where to put the table and chairs, and how many cabinets are needed.

What's in a Model?
To illustrate what goes into a data model, let's assume we're creating a very simple inventory database for a widget-building assembly line. We need to know the following:
• What data do we include? Parts numbers for our widget models, the subcomponents they're made from, raw materials and parts suppliers, costs and delivery times, inventory on hand and assembly time.

• How do data elements relate to one another? Some suppliers deliver faster than others but charge more.

• What processes, operations and transformations might we need to do? Calculate total cost per widget.

• What kinds of questions will we need to answer? How much are we paying for parts? How many widgets can we produce next week?

• What other business processes or activities might use this data? Accounts payable, product planning and sales.
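One of the questions above — calculating total cost per widget — can be answered directly from the model's data elements. A minimal sketch, with part numbers, quantities and unit costs invented for illustration:

```python
# Hypothetical bill of materials: part_no -> quantity per widget.
bill_of_materials = {
    "GEAR-01": 4,
    "AXLE-02": 2,
    "CASE-03": 1,
}
# Hypothetical supplier prices: part_no -> unit cost in dollars.
unit_cost = {"GEAR-01": 0.75, "AXLE-02": 1.20, "CASE-03": 3.50}

total = sum(qty * unit_cost[part] for part, qty in bill_of_materials.items())
print(f"cost per widget: ${total:.2f}")  # cost per widget: $8.90
```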

While building this database, our data model proceeds from conceptual model to logical design to physical implementation:
1. Interview business users.
2. Define the data elements and their relationships.
3. Create a data model.
4. Select the database type and specific database management system (DBMS); often, it will be whatever you're already using.
5. Map data-model elements to tables, and normalize them.
6. Create data type definitions and a database structure.
7. Design the application.

Creating a data model takes a different mind-set than application development does.


In Steps 1 through 3, at the conceptual level, we must think about what we're dealing with.


In Steps 4 through 6, the focus shifts to how we implement the model.


Step 5 marks the transition to logical design and Step 6 to physical design, both of which need to meet our DBMS's specific requirements. In Step 7, programmers implement the procedures that use and manipulate the data.

Kay (russkay@charter.net) is a Computerworld contributing writer in Worcester, Mass.





Thursday, October 25, 2007

PSTN vs ISDN

PSTN
Short for Public Switched Telephone Network, which refers to the international telephone system based on copper wires carrying analog voice data. This is in contrast to newer telephone networks based on digital technologies, such as ISDN and FDDI. Telephone service carried by the PSTN is often called plain old telephone service (POTS).

ISDN
Abbreviation of integrated services digital network, an international communications standard for sending voice, video, and data over digital telephone lines or normal telephone wires. ISDN supports data transfer rates of 64 Kbps (64,000 bits per second).

There are two types of ISDN:
> Basic Rate Interface (BRI) -- consists of two 64-Kbps B-channels and one D-channel for transmitting control information.
> Primary Rate Interface (PRI) -- consists of 23 B-channels and one D-channel (U.S.) or 30 B-channels and one D-channel (Europe).
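The aggregate bit rates implied by the channel counts above work out as follows (each B-channel carries 64 Kbps; the D-channel is 16 Kbps on a BRI and 64 Kbps on a PRI):

```python
# Channel rates in kbps.
B, D_BRI, D_PRI = 64, 16, 64

bri = 2 * B + D_BRI       # Basic Rate Interface: 2B+D
pri_us = 23 * B + D_PRI   # US/Japan PRI: 23B+D (T1 payload)
pri_eu = 30 * B + D_PRI   # European PRI: 30B+D (E1 payload)

print(bri, pri_us, pri_eu)  # 144 1536 1984
```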

The original version of ISDN employs baseband transmission. Another version, called B-ISDN, uses broadband transmission and is able to support transmission rates of 1.5 Mbps. B-ISDN requires fiber optic cables and is not widely available.

A set of communications standards allowing a single wire or optical fibre to carry voice, digital network services and video. ISDN is intended to eventually replace the plain old telephone system.

ISDN was first published as one of the 1984 ITU-T Red Book recommendations. The 1988 Blue Book recommendations added many new features. ISDN uses mostly existing Public Switched Telephone Network (PSTN) switches and wiring, upgraded so that the basic "call" is a 64 kilobits per second, all-digital end-to-end channel. Packet and frame modes are also provided in some places.

There are different kinds of ISDN connection of varying bandwidth (see DS level):
DS0 = 1 channel PCM at 64 kbps
T1 or DS1 = 24 channels PCM at 1.54 Mbps
T1C or DS1C = 48 channels PCM at 3.15 Mbps
T2 or DS2 = 96 channels PCM at 6.31 Mbps
T3 or DS3 = 672 channels PCM at 44.736 Mbps
T4 or DS4 = 4032 channels PCM at 274.1 Mbps

Each channel here is equivalent to one voice channel. DS0 is the lowest level of the circuit. T1C, T2 and T4 are rarely used, except maybe for T2 over microwave links. For some reason 64 kbps is never called "T0".

A Basic Rate Interface (BRI) is two 64K "bearer" channels and a single "delta" channel ("2B+D"). A Primary Rate Interface (PRI) in North America and Japan consists of 24 channels, usually 23 B + 1 D channel with the same physical interface as T1. Elsewhere the PRI usually has 30 B + 1 D channel and an E1 interface.

A Terminal Adaptor (TA) can be used to connect ISDN channels to existing interfaces such as EIA-232 and V.35.

Different services may be requested by specifying different values in the "Bearer Capability" field in the call setup message. One ISDN service is "telephony" (i.e. voice), which can be provided using less than the full 64 kbps bandwidth (64 kbps would provide for 8192 eight-bit samples per second) but will require the same special processing or bit diddling as ordinary PSTN calls. Data calls have a Bearer Capability of "64 kbps unrestricted".

ISDN is offered by local telephone companies, but most readily in Australia, France, Japan and Singapore, with the UK somewhat behind and availability in the USA rather spotty (as of March 1994). ISDN deployment in Germany is quite impressive, although (or perhaps because) they use a specifically German signalling specification, called 1.TR.6. The French Numeris also uses a non-standard protocol (called VN4, the 4th version), but the popularity of ISDN in France is probably lower than in Germany, given the ludicrous pricing. There is also a specifically Belgian V1 experimental system. The whole of Europe is now phasing in Euro-ISDN.

BASEBAND
The original band of frequencies of a signal before it is modulated for transmission at a higher frequency.
1. A type of data transmission in which digital or analog data is sent over a single unmultiplexed channel, such as an Ethernet LAN.
2. Baseband transmission uses TDM to send simultaneous bits of data along the full bandwidth of the transmission channel.

BROADBAND
A type of data transmission in which a single medium (wire) can carry several channels at once. Cable TV, for example, uses broadband transmission. In contrast, baseband transmission allows only one signal at a time. Most communications between computers, including the majority of local-area networks, use baseband communications. An exception is B-ISDN networks, which employ broadband transmission.

SNMP
Short for Simple Network Management Protocol, a set of protocols for managing complex networks. The first versions of SNMP were developed in the early 80s. SNMP works by sending messages, called protocol data units (PDUs), to different parts of a network. SNMP-compliant devices, called agents, store data about themselves in Management Information Bases (MIBs) and return this data to the SNMP requesters.

VPN
Short for virtual private network, a network that is constructed by using public wires to connect nodes. For example, there are a number of systems that enable you to create networks using the Internet as the medium for transporting data. These systems use encryption and other security mechanisms to ensure that only authorized users can access the network and that the data cannot be intercepted.

Network Topology

What is a Topology?
The physical topology of a network refers to the configuration of cables, computers, and other peripherals. Physical topology should not be confused with logical topology which is the method used to pass information between workstations. Logical topology was discussed in the Protocol chapter.

Main Types of Physical Topologies
The following sections discuss the physical topologies used in networks and other related topics.
· Linear Bus
· Star
· Star-Wired Ring
· Tree
· Considerations When Choosing a Topology
· Summary Chart

Linear Bus
A linear bus topology consists of a main run of cable with a terminator at each end (See fig. 1). All nodes (file server, workstations, and peripherals) are connected to the linear cable. Ethernet and LocalTalk networks use a linear bus topology.
Fig. 1. Linear Bus topology

Advantages of a Linear Bus Topology
· Easy to connect a computer or peripheral to a linear bus.
· Requires less cable length than a star topology.

Disadvantages of a Linear Bus Topology
· Entire network shuts down if there is a break in the main cable.
· Terminators are required at both ends of the backbone cable.
· Difficult to identify the problem if the entire network shuts down.
· Not meant to be used as a stand-alone solution in a large building.

Star
A star topology is designed with each node (file server, workstations, and peripherals) connected directly to a central network hub or concentrator (See fig. 2).

Data on a star network passes through the hub or concentrator before continuing to its destination. The hub or concentrator manages and controls all functions of the network. It also acts as a repeater for the data flow. This configuration is common with twisted pair cable; however, it can also be used with coaxial cable or fiber optic cable.
Fig. 2. Star topology

Advantages of a Star Topology
· Easy to install and wire.
· No disruptions to the network when connecting or removing devices.
· Easy to detect faults and to remove parts.

Disadvantages of a Star Topology
· Requires more cable length than a linear topology.
· If the hub or concentrator fails, nodes attached are disabled.
· More expensive than linear bus topologies because of the cost of the concentrators.

The protocols used with star configurations are usually Ethernet or LocalTalk. Token Ring uses a similar topology, called the star-wired ring.

Star-Wired Ring
A star-wired ring topology may appear (externally) to be the same as a star topology. Internally, the MAU (multi-station access unit) of a star-wired ring contains wiring that allows information to pass from one device to another in a circle or ring (See fig. 3). The Token Ring protocol uses a star-wired ring topology.

Tree
A tree topology combines characteristics of linear bus and star topologies. It consists of groups of star-configured workstations connected to a linear bus backbone cable (See fig. 4). Tree topologies allow for the expansion of an existing network, and enable schools to configure a network to meet their needs.
Fig. 4. Tree topology

Advantages of a Tree Topology
· Point-to-point wiring for individual segments.
· Supported by several hardware and software vendors.

Disadvantages of a Tree Topology
· Overall length of each segment is limited by the type of cabling used.
· If the backbone line breaks, the entire segment goes down.
· More difficult to configure and wire than other topologies.

5-4-3 Rule
A consideration in setting up a tree topology using Ethernet protocol is the 5-4-3 rule. One aspect of the Ethernet protocol requires that a signal sent out on the network cable reach every part of the network within a specified length of time. Each concentrator or repeater that a signal goes through adds a small amount of time. This leads to the rule that between any two nodes on the network there can only be a maximum of 5 segments, connected through 4 repeaters/concentrators. In addition, only 3 of the segments may be populated (trunk) segments if they are made of coaxial cable. A populated segment is one which has one or more nodes attached to it. In Figure 4, the 5-4-3 rule is adhered to. The furthest two nodes on the network have 4 segments and 3 repeaters/concentrators between them.

This rule does not apply to other network protocols, or to Ethernet networks where all-fiber-optic cabling or a combination of a fiber backbone with UTP cabling is used. If there is a combination of fiber optic backbone and UTP cabling, the rule simply translates to a 7-6-5 rule.
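The check described above can be sketched as a small function (a simplified model: it only validates the counts along one path between two nodes):

```python
# 5-4-3 rule: between any two nodes, at most 5 segments, 4 repeaters,
# and 3 populated coax segments; 7-6-5 with a fiber backbone plus UTP.
def path_ok(segments, repeaters, populated, fiber_backbone=False):
    max_seg, max_rep, max_pop = (7, 6, 5) if fiber_backbone else (5, 4, 3)
    return segments <= max_seg and repeaters <= max_rep and populated <= max_pop

print(path_ok(4, 3, 2))                       # True  - the Figure 4 layout
print(path_ok(5, 4, 4))                       # False - too many populated segments
print(path_ok(6, 5, 4, fiber_backbone=True))  # True  - under the 7-6-5 limits
```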

Considerations When Choosing a Topology:
· Money. A linear bus network may be the least expensive way to install a network; you do not have to purchase concentrators.
· Length of cable needed. The linear bus network uses shorter lengths of cable.
· Future growth. With a star topology, expanding a network is easily done by adding another concentrator.
· Cable type. The most common cable in schools is unshielded twisted pair, which is most often used with star topologies.

Networking Hardware

What is Networking Hardware?
Networking hardware includes all computers, peripherals, interface cards and other equipment needed to perform data-processing and communications within the network.

This section provides information on the following components:
· File Servers
· Workstations
· Network Interface Cards
· Switches
· Repeaters
· Bridges
· Routers

File Servers
A file server stands at the heart of most networks. It is a very fast computer with a large amount of RAM and storage space, along with a fast network interface card. The network operating system software resides on this computer, along with any software applications and data files that need to be shared.

The file server controls the communication of information between the nodes on a network. For example, it may be asked to send a word processor program to one workstation, receive a database file from another workstation, and store an e-mail message during the same time period. This requires a computer that can store a lot of information and share it very quickly.

File servers should have at least the following characteristics:
· 800 megahertz or faster microprocessor (Pentium 3 or 4, G4 or G5)
· A fast hard drive with at least 120 gigabytes of storage
· A RAID (Redundant Array of Inexpensive Disks) to preserve data after a disk failure
· A tape back-up unit (i.e. DAT, JAZ, Zip, or CD-RW drive)
· Numerous expansion slots
· Fast network interface card
· At least 512 MB of RAM

Workstations
All of the user computers connected to a network are called workstations. A typical workstation is a computer that is configured with a network interface card, networking software, and the appropriate cables. Workstations do not necessarily need floppy disk drives because files can be saved on the file server. Almost any computer can serve as a network workstation.

Network Interface Cards
The network interface card (NIC) provides the physical connection between the network and the computer workstation. Most NICs are internal, with the card fitting into an expansion slot inside the computer. Some computers, such as Mac Classics, use external boxes which are attached to a serial port or a SCSI port. Laptop computers can now be purchased with a network interface card built-in or with network cards that slip into a PCMCIA slot.

Network interface cards are a major factor in determining the speed and performance of a network. It is a good idea to use the fastest network card available for the type of workstation you are using.

The three most common network interface connections are Ethernet cards, LocalTalk connectors, and Token Ring cards. According to an International Data Corporation study, Ethernet is the most popular, followed by Token Ring and LocalTalk (Sant'Angelo, R. (1995). NetWare Unleashed, Indianapolis, IN: Sams Publishing).

Ethernet Cards
Ethernet cards are usually purchased separately from a computer, although many computers (such as the Macintosh) now include an option for a pre-installed Ethernet card. Ethernet cards contain connections for either coaxial or twisted pair cables (or both) (See fig. 1). If it is designed for coaxial cable, the connection will be BNC. If it is designed for twisted pair, it will have a RJ-45 connection. Some Ethernet cards also contain an AUI connector. This can be used to attach coaxial, twisted pair, or fiber optics cable to an Ethernet card. When this method is used there is always an external transceiver attached to the workstation. (See the Cabling section for more information on connectors.)
Fig. 1. Ethernet card. From top to bottom: RJ-45, AUI, and BNC connectors

LocalTalk Connectors
LocalTalk is Apple's built-in solution for networking Macintosh computers. It utilizes a special adapter box and a cable that plugs into the printer port of a Macintosh (See fig. 2). A major disadvantage of LocalTalk is that it is slow in comparison to Ethernet. Most Ethernet connections operate at 10 Mbps (Megabits per second). In contrast, LocalTalk operates at only 230 Kbps (or .23 Mbps).
Fig.2. LocalTalk connectors

Token Ring Cards
Token Ring network cards look similar to Ethernet cards. One visible difference is the type of connector on the back end of the card. Token Ring cards generally have a nine pin DIN type connector to attach the card to the network cable.

Switch
A switch (or concentrator) is a device that provides a central connection point for cables from workstations, servers, and peripherals. In a star topology, twisted-pair wire is run from each workstation to a central switch/hub. Most switches are active, that is, they electrically amplify the signal as it moves from one device to another. Unlike the hubs of the past, switches do not broadcast network packets to every port; they memorize the addresses of attached computers and send each packet directly to the correct location. Switches are:
· Usually configured with 8, 12, or 24 RJ-45 ports
· Often used in a star or star-wired ring topology
· Sold with specialized software for port management
· Sometimes loosely called hubs
· Usually installed in a standardized metal rack that also may store netmodems, bridges, or routers

Repeaters
Since a signal loses strength as it passes along a cable, it is often necessary to boost the signal with a device called a repeater. The repeater electrically amplifies the signal it receives and rebroadcasts it. Repeaters can be separate devices or they can be incorporated into a concentrator. They are used when the total length of your network cable exceeds the standards set for the type of cable being used.

A good example of the use of repeaters would be in a local area network using a star topology with unshielded twisted-pair cabling. The length limit for unshielded twisted-pair cable is 100 meters. The most common configuration is for each workstation to be connected by twisted-pair cable to a multi-port active concentrator. The concentrator amplifies all the signals that pass through it allowing for the total length of cable on the network to exceed the 100 meter limit.

Bridges
A bridge is a device that allows you to segment a large network into two smaller, more efficient networks. If you are adding to an older wiring scheme and want the new network to be up-to-date, a bridge can connect the two.

A bridge monitors the information traffic on both sides of the network so that it can pass packets of information to the correct location. Most bridges can "listen" to the network and automatically figure out the address of each computer on both sides of the bridge. The bridge can inspect each message and, if necessary, broadcast it on the other side of the network.
The bridge manages the traffic to maintain optimum performance on both sides of the network. You might say that the bridge is like a traffic cop at a busy intersection during rush hour. It keeps information flowing on both sides of the network, but it does not allow unnecessary traffic through. Bridges can be used to connect different types of cabling, or physical topologies. They must, however, be used between networks with the same protocol.

Routers
A router translates information from one network to another; it is similar to a superintelligent bridge. Routers select the best path to route a message, based on the destination address and origin. The router can direct traffic to prevent head-on collisions, and is smart enough to know when to direct traffic along back roads and shortcuts.

While bridges know the addresses of all computers on each side of the network, routers know the addresses of computers, bridges, and other routers on the network. Routers can even "listen" to the entire network to determine which sections are busiest -- they can then redirect data around those sections until they clear up.

If you have a school LAN that you want to connect to the Internet, you will need to purchase a router. In this case, the router serves as the translator between the information on your LAN and the Internet. It also determines the best route to send the data over the Internet. Routers can:
· Direct signal traffic efficiently
· Route messages between any two protocols
· Route messages between linear bus, star, and star-wired ring topologies
· Route messages across fiber optic, coaxial, and twisted-pair cabling