As 10 Gigabit Ethernet (10GbE) is introduced into networks, the physical limitations and properties of optical fiber introduce new challenges for the network designer. Due to the increased data rate, fiber effects such as dispersion (intermodal, chromatic, or polarization) become a factor in the achievable distances of 10GbE links. This leaves the network designer with new decisions and trade-offs to understand and overcome. This paper provides an introduction to the world of optical fiber and covers the unique network design issues that 10GbE introduces into an optical fiber network.
There are two different types of optical fiber: multimode and single-mode. Both are used in a broad range of telecommunications and data networking applications. These fiber types have dominated the commercial fiber market since the 1970’s. The distinguishing difference, and the basis for the naming of the fibers, is in the number of modes allowed to propagate in the core of a fiber. A “mode” is an allowable path for the light to travel down a fiber. A multimode fiber allows many light propagation paths, while a single-mode fiber allows only one light path.
In multimode fiber, the time it takes for light to travel through a fiber is different for each mode resulting in a spreading of the pulse at the output of the fiber referred to as intermodal dispersion. The difference in the time delay between the modes is called Differential Mode Delay (DMD). Intermodal dispersion limits multimode fiber bandwidth. This is significant because a fiber’s bandwidth determines its information carrying capacity, i.e., how far a transmission system can operate at a specified bit error rate.
The optical fiber guides the light launched into the fiber core (Figure 1). The cladding is a layer of material that surrounds the core and is designed so that the light launched into the core is contained there. When the light launched into the core strikes the cladding, the light is reflected from the core-to-cladding interface. The condition of total internal reflection (when all of the light launched into the core remains in the core) is a function of both the angle at which the light strikes the core-to-cladding interface and the index of refraction of the materials. The index of refraction (n) is a dimensionless number that characterizes the speed of light in a specific medium relative to the speed of light in a vacuum. To confine light within the core of an optical fiber, the index of refraction of the cladding (n2) must be less than the index of refraction of the core (n1).
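The geometry above can be made concrete with a short calculation. This is an illustrative sketch using assumed refractive indices for a silica fiber (the values are not from this paper): it computes the critical angle for total internal reflection and the fiber's numerical aperture, which describes how large a cone of light the core can accept.

```python
import math

# Assumed refractive indices for a silica fiber (illustrative values,
# not taken from the paper). The cladding index must be below the core index.
n_core = 1.48
n_clad = 1.46

# Critical angle (measured from the normal to the core-to-cladding
# interface): light striking the interface above this angle is totally
# internally reflected and stays in the core.
theta_c = math.degrees(math.asin(n_clad / n_core))

# Numerical aperture: the sine of the half-angle of the acceptance cone.
na = math.sqrt(n_core**2 - n_clad**2)

print(f"critical angle  ~ {theta_c:.1f} degrees")
print(f"numerical aperture ~ {na:.3f}")
```

Even a small index difference between core and cladding yields a critical angle close to 90 degrees, which is why light launched nearly parallel to the fiber axis stays guided.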
Fibers are classified in part by their core and cladding dimensions. Single-mode fibers have a much smaller core diameter than multimode fibers. However, the Mode Field Diameter (MFD) rather than the core diameter is used in single-mode fiber specifications. The MFD describes the distribution of the optical power in the fiber by providing an "equivalent" diameter, sometimes referred to as the spot size. The MFD is always larger than the core diameter, with nominal values ranging between 8 and 10 microns, while single-mode fiber core diameters are approximately 8 microns or less. Unlike single-mode fiber, multimode fiber is usually referred to by its core and cladding diameters. For example, fiber with a core of 62.5 microns and a cladding diameter of 125 microns is referred to as a 62.5/125 micron fiber. Popular multimode product offerings have core diameters of 50 microns or 62.5 microns with a cladding diameter of 125 microns. Single-mode fibers also have 125 micron cladding diameters.
A single-mode fiber, having a single propagation mode and therefore no intermodal dispersion, has higher bandwidth than multimode fiber. This allows for higher data rates over much longer distances than achievable with multimode fiber. Consequently, long haul telecommunications applications only use single-mode fiber, and it is deployed in nearly all metropolitan and regional configurations. Long distance carriers, local Bells, and government agencies transmit traffic over single-mode fiber laid beneath city streets, under rural cornfields, and strung from telephone poles. Although single-mode fiber has higher bandwidth, multimode fiber supports high data rates at short distances. The smaller core diameter of single-mode fiber also increases the difficulty in coupling sufficient optical power into the fiber. Relaxed tolerances on optical coupling requirements afforded by multimode fiber enable the use of transmitter packaging tolerances that are less precise, thereby allowing lower cost transceivers or lasers. As a result, multimode fiber has dominated in shorter distance and cost sensitive LAN applications.
A number of domestic and international organizations are involved in management of the optical and mechanical parameters of optical fiber in both bare and cabled form, as well as their subsequent application. The common charter of all of these organizations can be distilled into a few salient points. By introducing standardized bounds on the optical parameters of transmission fibers (e.g., modal dispersion, attenuation, cutoff wavelength), system vendors and customers alike can be assured of reasonable degrees of infrastructure capability and consistency, while fiber manufacturers have reasonable flexibility for product improvement and new product development. The broader missions of certification assistance and promotion of international trade apply as well.
Outside of cabling and mechanical specifications, which are equally addressed in the standards, the primary optical specifications are modal bandwidth and attenuation (for multimode fiber), and attenuation, chromatic dispersion, and cutoff wavelength for single-mode fiber. See the glossary for a definition of these terms.
The standards bodies with a vested interest in the governance of optical fiber specifications include the ITU-T, the IEC, ISO, the TIA/EIA, ANSI, and the IEEE, all of which are referenced throughout this paper.
Multimode fiber is used extensively in the campus LAN environment where distances between buildings are 2 km or less. The broad market penetration and acceptance of 62.5/125 micron multimode fiber was initiated by its inclusion in the Fiber Distributed Data Interface (FDDI) standard. FDDI, developed under ANSI in the late 1980s, drove the use of 62.5 micron multimode fiber into the campus LAN environment. The “FDDI grade” multimode fiber specification is currently referenced in many networking standards (such as Ethernet, Token Ring, and ATM) and in the TIA/EIA 568-A cabling standard.
During the development of the FDDI standard, a number of commercially available multimode and single-mode fiber types were considered to meet the FDDI objective of achieving a 100 Mbps data rate (125 Mbaud) over distances of up to 2 km on fiber cable. Multimode was favored over single-mode because it met the 2 km distance objective and had the advantage of lower cost transceivers. At that time, the commercially available multimode fiber types were 50/125, 62.5/125, 85/125, and 100/140 micron. 50/125 micron fiber continues to be popular in Japan and Europe and is supported in the ISO/IEC 11801 standard (Generic cabling for customer premises). The 85/125 micron fiber was driven by international initiatives promoting its use as a LAN fiber alternative. The 100/140 micron product is specified in a number of networking applications and is used in military applications.
Table 1: Multimode fiber types

| Fiber type | Characteristics |
| --- | --- |
| 50/125 micron | First long haul telecommunications fiber deployed with lasers. Today it is used in campus LAN premises applications such as 10GbE; specified in the ISO/IEC 11801 and TIA/EIA-568 cabling standards. |
| 62.5/125 micron | Its larger core diameter and higher numerical aperture couple more light from LED sources than 50/125 fiber. Driven into the market by LED-based systems such as FDDI. |
| 85/125 micron | Requirements were based on international initiatives promoting its use as a LAN fiber. Sensitive to bending-induced optical losses. The least used multimode fiber. |
| 100/140 micron | Used in low data rate, short distance applications. More light coupling into the larger core makes it less sensitive to fiber and connector loss. |
Fiber information carrying capacity is typically rated in terms of a bandwidth length product (MHz*km), which can be used to determine how far a system can operate at a given bit rate (e.g., 1 Gbps or 10 Gbps). Naturally, as transmission speed goes up, for a given modal bandwidth, the distance that the signal can travel is reduced.
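The trade-off above can be sketched numerically. This is a rough rule of thumb, not a formula from the 802.3ae specification: for a fixed bandwidth-length product, the usable distance scales inversely with the signal bandwidth, which in turn roughly tracks the bit rate.

```python
# Rough rule of thumb (an approximation, not an 802.3ae formula): the
# usable distance for a given modal bandwidth-length product scales
# inversely with the signal bandwidth required by the data rate.
def max_distance_km(bw_length_mhz_km: float, signal_bw_mhz: float) -> float:
    return bw_length_mhz_km / signal_bw_mhz

# 2000 MHz*km laser-optimized multimode fiber:
print(max_distance_km(2000, 1_000))   # ~2 km at a ~1 GHz signal bandwidth
print(max_distance_km(2000, 10_000))  # ~0.2 km at ~10 GHz
```

A tenfold increase in data rate cuts the bandwidth-limited reach by the same factor, which is why distances that were comfortable at 1 GbE become marginal at 10 GbE.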
At transmission speeds up to 622 Mbps (OC-12/STM-4), multimode fiber can be driven with a light emitting diode (LED). Beyond those speeds, however, an LED cannot turn on and off fast enough, and a laser source is therefore required. During the development of the 1 Gigabit Ethernet standard, it was discovered that multimode fiber bandwidth using a laser launch could be lower than the bandwidth measured with an LED launch. To mitigate this effect and achieve acceptable multimode fiber optic operating distances for 1 GbE and 10GbE, specifications had to be created to address the fiber optic transmitter launch conditions, the fiber optic receiver bandwidth, and the fiber cable characteristics.
The IEEE 802.3ae 10 Gigabit Ethernet specification includes a serial interface referred to as 10GBASE-S (the "S" stands for short wavelength) that is designed for 850 nm transmission on multimode fiber. Table 2 provides the wavelength, modal bandwidth, and operating distance for different types of multimode fiber operating at 10 Gbps. Technical issues relating to the use of laser sources with multimode fibers (discussed in the previous section) have significantly limited the operating range of 10GbE over "FDDI grade" fiber. The "FDDI grade" multimode fiber has a modal bandwidth of 160 MHz*km at 850 nm and a modal bandwidth of 500 MHz*km at 1300 nm.
Table 2: 10GBASE-S (850 nm) modal bandwidth and operating distance

| Description | 62.5 micron fiber | 62.5 micron fiber | 50 micron fiber | 50 micron fiber | 50 micron fiber | Unit |
| --- | --- | --- | --- | --- | --- | --- |
| Modal bandwidth (min) | 160 | 200 | 400 | 500 | 2000 | MHz*km |
| Operating distance | 26 | 33 | 66 | 82 | 300 | m |
To address the operating range concern, a new multimode fiber specification had to be created for 10GbE to achieve multimode fiber operating distances of 300 m (as specified in the TIA/EIA-568 and ISO/IEC 11801 cabling standards). This new fiber, referred to by some as "10 Gigabit Ethernet multimode fiber", is an 850 nm, laser-optimized, 50/125 micron fiber with an effective modal bandwidth of 2000 MHz*km and is detailed in TIA-492AAAC. Its key difference relative to legacy multimode fibers is the additional set of DMD requirements specified in TIA-492AAAC, enabled by a new measurement standard for DMD (TIA FOTP-220). As shown in Table 2, this fiber can achieve 300 m of distance with a 10GBASE-S interface. Many leading optical fiber vendors are actively marketing this new multimode fiber for 10GbE applications.
There are two major factors which will likely drive use of this new “10GbE multimode fiber”: 1) the popularity of short reach (300 m or less) 10GbE applications and 2) the cost of 10GBASE-S interfaces relative to the others. Evidence of the popularity of low cost, short distance 850 nm multimode Ethernet applications can be found in the number of 1000BASE-SX ports shipped for 1 Gigabit Ethernet. 1000BASE-SX operates up to 550 meters on multimode fiber and has garnered a large percentage of the total number of 1 GbE switch ports shipped. Ultimately the marketplace will determine the popularity of “10GbE multimode fiber”. The alternative is to use single-mode fiber over a 10GBASE-L or 10GBASE-E interface or the 10GBASE-LX4 interface, which supports both single-mode and multimode fiber over distances of 10 km and 300 m, respectively.
There are four different types of single-mode fiber in popular use today (as of the writing of this paper, May 2002). They are summarized in Table 3. The ITU-T Series G.652 recommendation, commonly referred to as standard single-mode fiber, represents the majority of the installed base of single-mode fiber. The G.652 recommendation describes both standard single-mode fiber (IEC type B1.1) and low water-peak standard single-mode fiber (IEC type B1.3). The performance data in the 10GbE standard is based upon the use of standard single-mode fiber types B1.1 and B1.3, in other words the overall G.652 recommendation. This does not, however, preclude the use of other types of single-mode fiber with 10GBASE-E, since their use may potentially enhance the performance of a 10GbE link.
Standard single-mode fiber is essentially a thin core (5-8 microns) of germanium-doped glass surrounded by a thicker layer of pure glass, and it is the overwhelming workhorse of the optical communications infrastructure. Nearly any application can be addressed with standard single-mode fiber, but it is optimized to support transmission at 1310 nm. Performance issues with standard single-mode fiber become more significant as higher data rates (such as 10 Gbps) and longer distances (>40 km) are encountered. Low water-peak standard single-mode fiber (IEC type B1.3) has the same dispersion characteristics as standard single-mode fiber (IEC type B1.1), but has reduced attenuation in the region of the water peak (nominally 1383 nm). Because no specification is given for water-peak attenuation in standard single-mode fiber (IEC type B1.1), attenuation in the region of 1383 nm can be significantly higher than that at 1310 nm. By reducing the water impurities introduced in this region during manufacture, low water-peak standard single-mode fiber (IEC type B1.3) provides identical support to standard single-mode fiber and can additionally support wavelengths between 1360 and 1460 nm.
Note again that the IEEE 802.3ae specification for 10 Gigabit Ethernet assumes standard single-mode fiber (IEC types B1.1 and B1.3) for all single-mode performance specifications. Additional fiber types (e.g., DSF, NZDSF) may offer benefits beyond the constraints of the standard, but are not required to meet any specifications detailed within the 10GbE standard.
Dispersion shifted fiber (DSF) was introduced in the mid-1980s and represents a very small percentage of the installed base of single-mode fiber. The need for DSF was driven by the development of 1550 nm lasers, since fiber attenuation is much lower at 1550 nm than at 1310 nm. The reduced chromatic dispersion of DSF allowed optical signals to travel significantly farther without the need for regeneration or compensation, effectively allowing an optical pulse to maintain its integrity over longer distances. DSF was well suited to single-channel optical transmission systems. However, with the advent of broadband optical amplifiers and wavelength division multiplexing (WDM), the chromatic dispersion characteristics of DSF presented detrimental effects to multiple wavelength signal integrity. As a result, a new type of fiber was needed, namely non-zero dispersion shifted fiber (NZDSF). NZDSF effectively obsoleted DSF, and thus DSF is no longer commercially offered. DSF is not referred to in the IEEE 802.3ae specification.
Cutoff shifted single-mode fiber is designed to allow for extended transmission distances through lower attenuation and the ability to support higher power signals. This fiber is typically used only for transmission in the 1550 nm region due to a high cutoff wavelength around 1500 nm. Due to significant manufacturing complexity, cutoff shifted single-mode fiber is typically much more expensive than other single-mode fiber types. It is commonly found only in submarine applications due to the stringent requirements in such an environment, and is not likely to be encountered in situations where 10 Gigabit Ethernet transport solutions will be deployed. Cutoff shifted fiber is not referred to in the IEEE 802.3ae specification.
Non-zero dispersion shifted fiber (NZDSF) was introduced in the mid-1990s to address issues encountered with multiple wavelength transmission over DSF by maintaining a finite amount of chromatic dispersion across the optical window (typically 1530-1625 nm) commonly exploited by wavelength division multiplexing (WDM). The primary concern addressed by NZDSF is a nonlinear effect known as four wave mixing (FWM). In simple terms, three wavelengths carrying different information can generate signals at another wavelength. In the regularly spaced channel plan of most WDM systems (usually 1.6 nm or less between adjacent wavelengths), the newly generated noise signals can overlap with a wavelength carrying live traffic. NZDSF mitigates this effect by ensuring that all wavelengths in the region of interest (1530-1625 nm) encounter some finite dispersion, so that signals on adjacent wavelengths will not overlap in time for extended periods. Four wave mixing is reduced as the time during which adjacent wavelength signals overlap is shortened. The reduced chromatic dispersion of NZDSF can also reduce the detrimental contributions of other nonlinear effects such as self-phase modulation (SPM) and cross-phase modulation (XPM). NZDSF is optimized for transmission in the 1530-1625 nm window, but can support some 1310 nm configurations with proper consideration given to laser type and system configurations.
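The four wave mixing problem can be illustrated with a short enumeration. This sketch uses an assumed 200 GHz channel grid (frequencies in THz, not values from the paper) and shows that the mixing products f1 + f2 - f3 land exactly on the same regular grid, where they would collide with live channels.

```python
# Illustrative only: three WDM channels on an assumed 200 GHz grid
# (optical frequencies in THz). Four wave mixing generates light at
# f1 + f2 - f3; with evenly spaced channels these products fall
# exactly on grid positions.
channels = [193.0, 193.2, 193.4]

# Enumerate every f1 + f2 - f3 combination, rounding to absorb
# floating-point noise, then discard the original channel frequencies.
products = {round(f1 + f2 - f3, 1)
            for f1 in channels for f2 in channels for f3 in channels}
products -= set(channels)

print(sorted(products))  # [192.6, 192.8, 193.6, 193.8]
```

Every mixing product sits on a 200 GHz grid slot, so in a wider channel plan each would overlap a wavelength carrying traffic; this is precisely why equal channel spacing combined with zero dispersion (as in DSF) is problematic.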
The IEEE 802.3ae specification makes a brief reference to NZDSF as follows: “It is believed that for 10GBASE-E, type B4 (NZDSF) fiber with positive dispersion may be substituted for B1.1 or B1.3 (standard single-mode fiber). A link using B4 (NZDSF) fiber with negative dispersion should be validated for compliance at TP3”.
Table 3: Single-mode fiber types

| Name | ITU-T | IEC Reference | Optimized Dispersion Range (nm) | Referred to in 802.3ae Specification? |
| --- | --- | --- | --- | --- |
| Standard Single-Mode Fiber (Dispersion Unshifted Fiber) | G.652 | IEC 60793-2 (B1.1/B1.3) | 1300-1324 | Yes |
| Dispersion Shifted Fiber (DSF) | G.653 | IEC 60793-2 (B2) | 1500-1600 | No |
| Cutoff Shifted Fiber | G.654 | IEC 60793-2 (B1.2) | 1550-1625 | No |
| Non-Zero Dispersion Shifted Fiber (NZDSF) | G.655 | IEC 60793-2 (B4) | 1530-1565 (C-band), 1565-1625 (L-band) | Yes |
Standard single-mode fiber can address nearly any application, depending on the level of cost and complexity that an operator is willing to employ. The latter issues become more significant as higher data rates, different wavelengths, and/or longer distances are adopted.
For short fiber spans, optical transmission at 1310 nm remains an appealing option due to the price and availability of lasers at this wavelength. Several factors, however, drive consideration of transmission at longer wavelengths. At higher data rates, requirements on receiver sensitivity typically grow more stringent, requiring higher received optical powers to maintain low error rates. Due to the relatively high fiber attenuation at 1310 nm (see Table 4), maximum allowable transmission distances are reduced at 1310 nm compared to 1550 nm. At extended distances, where the received power falls below the sensitivity of optical receivers, signals in the 1550 nm region can be optically amplified (usually with an EDFA), whereas optical amplification is not commonly available at 1310 nm. As a result, 1310 nm transmission requires electrical regeneration, which is fundamentally more expensive than optical amplification.
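Because attenuation is specified in dB/km and accumulates logarithmically, the received power can be computed directly. This is a small illustrative sketch; the 0.25 dB/km figure is an assumed value for this example, not a specification from the paper.

```python
def output_power_mw(input_mw: float, length_km: float, atten_db_per_km: float) -> float:
    """Attenuation accumulates logarithmically: each km subtracts a fixed
    number of dB, so the linear power falls off exponentially with distance."""
    total_loss_db = length_km * atten_db_per_km
    return input_mw * 10 ** (-total_loss_db / 10)

# 1 mW launched over 40 km at an assumed 0.25 dB/km gives 10 dB of total
# loss, i.e. only a tenth of the launched power arrives at the receiver.
print(output_power_mw(1.0, 40.0, 0.25))  # 0.1 mW
```

The same 40 km span at a higher per-kilometer attenuation (as at 1310 nm) loses proportionally more dB, which is why longer spans favor the 1550 nm window.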
Table 4: Single-mode fiber attenuation

| Wavelength (nm) | Maximum fiber attenuation per IEC 60793-2 (dB/km) | Typical cabled attenuation (dB/km) |
| --- | --- | --- |
Optical pulses carrying digital information comprise a finite spectrum of wavelengths (not just one infinitely narrow wavelength). Since different wavelengths travel at different velocities in an optical fiber, the individual components of a single pulse will spread as the pulse propagates. Eventually, adjacent optical pulses will overlap with one another and the signal will become excessively degraded. At 1310 nm, attenuation will degrade a signal transmitted over standard single-mode fiber before chromatic dispersion becomes a problem; as a result, chromatic dispersion is not an issue for 10 Gbps transmission at 1310 nm over standard single-mode fiber. At 1550 nm, however, the increased chromatic dispersion of standard single-mode fiber becomes the significant limiting factor, typically limiting 10 Gigabit Ethernet transmission to 40 km, although this limit also depends on the choice of transmitter. Beyond the dispersion limited distance of standard single-mode fiber, a signal requires either electrical regeneration or some means of optical dispersion compensation. DSF and NZDSF have reduced chromatic dispersion in the 1550 nm region, thus extending the allowable distance before regeneration or optical dispersion compensation is required.
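The pulse spreading described above follows the simple product spread = D x L x Δλ. The sketch below uses a typical (assumed, not paper-specified) dispersion value of roughly 17 ps/nm/km for standard single-mode fiber at 1550 nm and an assumed transmitter linewidth of 0.1 nm.

```python
def pulse_spread_ps(d_ps_per_nm_km: float, length_km: float, linewidth_nm: float) -> float:
    """Chromatic dispersion broadening: spread = D * L * delta-lambda,
    where D is the dispersion coefficient, L the fiber length, and
    delta-lambda the spectral width of the source."""
    return d_ps_per_nm_km * length_km * linewidth_nm

# Typical value for standard single-mode fiber at 1550 nm is roughly
# 17 ps/nm/km (an assumed figure for this sketch). Over a 40 km link
# with a 0.1 nm source linewidth:
spread = pulse_spread_ps(17.0, 40.0, 0.1)
bit_period_ps = 1e12 / 10e9  # 100 ps per bit at 10 Gbps

# ~68 ps of spreading against a 100 ps bit period: a substantial
# fraction, which is why ~40 km is the practical 1550 nm limit.
print(round(spread, 1), "ps spread vs", bit_period_ps, "ps bit period")
```

The same calculation at 1 Gbps (a 1000 ps bit period) shows why chromatic dispersion only became a first-order design constraint at 10 Gbps.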
A routinely cited potential impact on 10 Gbps applications is the influence of Polarization Mode Dispersion (PMD) introduced by some installed fiber infrastructures. PMD effectively separates an optical signal into two orthogonally polarized components, which propagate down a fiber at different speeds. If the two components are significantly separated when a signal is finally received, the encoded information can be considerably deteriorated. Most optical fibers that comply with the current G.652 (standard single-mode fiber) and G.655 (non-zero dispersion shifted fiber) standards are suitable for 10 Gbps transmission in WAN-size applications. However, there are potential issues with older infrastructures, particularly those that contain fiber installed prior to the 1990s. Some optical fiber manufactured before this time had acceptable PMD characteristics, but the lack of PMD performance requirements in an industry standard allowed for significant variation between vendors and their manufacturing techniques. In fact, the necessity for standardization was precipitated in large part by the discovery of very poor PMD performance in fiber manufactured by one major supplier. Although standardization of PMD largely solved the problem, a significant amount of fiber installed prior to the early 1990s remains unlit and poses potential problems for 10 Gbps deployment. The situation is significant enough that several major carriers require PMD testing on any network link being considered for 10 Gbps transport. PMD remains a significant focus in optical fiber development as ultra-high data rates (40 Gbps and above) are considered.
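Unlike chromatic dispersion, the mean PMD delay grows with the square root of fiber length. The sketch below contrasts an assumed modern PMD coefficient with an assumed poor pre-1990s one (both values are illustrative, not from the paper) against the common engineering rule that mean delay should stay under roughly 10% of the bit period.

```python
import math

def mean_dgd_ps(pmd_coeff_ps_sqrt_km: float, length_km: float) -> float:
    """Mean differential group delay between the two polarization
    components grows with the square root of fiber length."""
    return pmd_coeff_ps_sqrt_km * math.sqrt(length_km)

# Illustrative coefficients (assumed): ~0.1 ps/sqrt(km) for modern fiber
# versus ~2 ps/sqrt(km) for some poorly performing pre-1990s fiber.
# Common rule of thumb: keep mean DGD below ~10% of the bit period.
bit_period_ps = 100.0  # 10 Gbps
for coeff in (0.1, 2.0):
    dgd = mean_dgd_ps(coeff, 80.0)  # hypothetical 80 km link
    verdict = "ok" if dgd < 0.1 * bit_period_ps else "PMD-limited"
    print(f"{coeff} ps/sqrt(km): {dgd:.1f} ps -> {verdict}")
```

Under these assumptions the modern fiber has ample margin at 80 km, while the legacy fiber exceeds the budget many times over, which is why carriers test older links before lighting them at 10 Gbps.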
Key factors to consider in the design of 10 Gigabit Ethernet networks are:

- The fiber type (multimode or single-mode) and its bandwidth and attenuation characteristics
- The link power budget for each optical interface
- The channel insertion loss (cable, connector, and splice losses)
- Dispersion effects (chromatic and polarization mode dispersion)
When designing individual fiber links, the first step is the characterization of the link power budget. This value (expressed in dB) is specified in the 10GbE standard for each optical interface; tables for all interfaces are shown in this section. The link power budget is calculated by taking the difference between the minimum transmitter power launched into the fiber and the minimum receiver sensitivity (Figure 2). The receiver sensitivity is the minimum amount of power necessary to maintain the required signal-to-noise ratio over the specified operating conditions. The link power budget determines the total loss due to attenuation and other factors that can be introduced between the transmitter and the receiver.
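The calculation is a simple subtraction of dBm quantities. The transmitter and receiver numbers below are hypothetical values chosen only to reproduce the 9.4 dB 10GBASE-L budget of Table 6; the actual launch power and sensitivity figures come from the 802.3ae draft, not this sketch.

```python
def link_power_budget_db(min_tx_dbm: float, rx_sensitivity_dbm: float) -> float:
    """Budget (dB) = minimum launch power (dBm) minus receiver
    sensitivity (dBm). Both inputs are typically negative dBm values."""
    return min_tx_dbm - rx_sensitivity_dbm

# Hypothetical transceiver numbers (illustrative, not from the draft)
# chosen to reproduce the 9.4 dB 10GBASE-L budget in Table 6:
print(round(link_power_budget_db(-5.2, -14.6), 1))  # 9.4
```

Because both quantities are in logarithmic units, the budget is independent of the absolute power level: a transmitter 1 dB hotter simply buys 1 dB more allowable loss.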
Figure 2: Link Power Budget
The link power budget is applied to account for the channel insertion loss and power penalty. Channel insertion loss is the key parameter and is defined to address the cable and connector losses (Figure 3). The channel insertion loss consists of the specified cable loss for each operating distance, splice losses and the loss of two connections. A connection consists of a mated pair of optical connectors. An allocation of 1.5 dB is budgeted for connector and splice losses for multimode fiber and 2 dB for single-mode fiber. For 10 Gigabit Ethernet applications a power penalty is allocated to the link power budget. This power penalty takes into account effects such as dispersion that may cause inter-symbol interference and therefore degrade an optical signal.
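The allocation described above can be checked arithmetically. This sketch uses the 10GBASE-S numbers from Table 5 for 2000 MHz*km laser-optimized fiber at its full 300 m reach.

```python
# How the 10GBASE-S budget in Table 5 is consumed for 2000 MHz*km
# laser-optimized fiber at 300 m:
link_budget_db = 7.3              # total 10GBASE-S link power budget
channel_insertion_loss_db = 2.6   # cable, connector, and splice losses
power_penalty_db = 4.7            # dispersion/ISI and other penalties

# Margin left over after loss and penalties are subtracted (~0 here:
# at the maximum operating distance the budget is fully allocated).
remaining_margin_db = link_budget_db - (channel_insertion_loss_db + power_penalty_db)
print(f"{abs(remaining_margin_db):.1f} dB margin remaining")
```

The budget being exactly consumed at the specified distance is no accident: the standard's operating distances are derived so that insertion loss plus penalties fill the budget at the limit.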
The 10 Gigabit Ethernet operating distances provided in the tables below are limited by the channel insertion loss, the cable bandwidth for multimode fiber, and the optical transceiver characteristics (i.e., the PMD, or Physical Medium Dependent, type). 10GBASE-E distances greater than 30 km are considered "engineered links" because supporting those distances requires the cable attenuation to be less than the maximum specified for standard single-mode fiber (Table 4). Therefore, distances greater than 30 km for installed cabling should be "field-tested" to verify conformance to the 11 dB (Table 7) channel insertion loss specification. Insertion loss measurements of installed fiber cables are made in accordance with ANSI/TIA/EIA-526-14A/method B and ANSI/TIA/EIA-526-7/method A-1.
Table 5: 10GBASE-S link power budget as per IEEE Draft P802.3ae/D5.0
| Description | 62.5 micron MMF | 62.5 micron MMF | 50 micron MMF | 50 micron MMF | 50 micron MMF | Unit |
| --- | --- | --- | --- | --- | --- | --- |
| Modal bandwidth at 850 nm | 160 | 200 | 400 | 500 | 2000 | MHz*km |
| Link power budget | 7.3 | 7.3 | 7.3 | 7.3 | 7.3 | dB |
| Channel insertion loss * | 1.6 | 1.6 | 1.7 | 1.8 | 2.6 | dB |
| Power penalty ** | 4.7 | 4.8 | 5.1 | 5.0 | 4.7 | dB |
* These channel insertion loss numbers are based on a wavelength of 850 nm
** These power penalties are based on a wavelength of 840 nm
Table 6: 10GBASE-L link power budget as per IEEE Draft P802.3ae/D5.0
| Description | Value | Unit |
| --- | --- | --- |
| Link power budget | 9.4 | dB |
| Channel insertion loss * | 6.2 | dB |
| Power penalty ** | 3.2 | dB |
* These channel insertion loss numbers are based on a wavelength of 1310 nm
** These power penalties are based on a wavelength of 1260 nm
Table 7: 10GBASE-E link power budget as per IEEE Draft P802.3ae/D5.0
| Description | Value | Value | Unit |
| --- | --- | --- | --- |
| Link power budget | 15.0 | 15.0 | dB |
| Operating distance | 30 | 40 *** | km |
| Channel insertion loss * | 10.9 | 10.9 | dB |
| Power penalty ** | 3.6 | 4.1 | dB |
* These channel insertion loss numbers are based on a wavelength of 1550 nm
** These power penalties are based on a wavelength of 1565 nm and other penalties
*** Distances greater than 30 kilometers mandate an "engineered link", requiring "field testing" for verification of conformance to the 11 dB channel insertion loss specification. Insertion loss measurements of installed fiber cables are made in accordance with ANSI/TIA/EIA-526-14A/method B and ANSI/TIA/EIA-526-7/method A-1.
Table 8: 10GBASE-LX4 link power budget as per IEEE Draft P802.3ae/D5.0
| Description | 62.5 micron MMF | 50 micron MMF | 50 micron MMF | SMF | Unit |
| --- | --- | --- | --- | --- | --- |
| Modal bandwidth as measured at 1300 nm (minimum, overfilled launch) | 500 | 400 | 500 | - | MHz*km |
| Link power budget | 7.5 | 7.5 | 7.5 | 8.2 | dB |
| Channel insertion loss * | 2.0 | 1.9 | 2.0 | 6.2 | dB |
| Power penalty ** | 5.0 | 5.5 | 5.5 | 1.9 | dB |
* These channel insertion loss numbers are based on a wavelength of 1300 nm for multimode and 1310 nm for single-mode fiber. An offset launch patch cord is assumed. The total insertion loss, when including the attenuation of the offset launch patch cord, is allowed to be 0.5 dB higher than shown in the table.
** These power penalties are based on a wavelength of 1269 nm and other penalties
Table 9: 10GbE supported fiber and distances
| Fiber | 62.5 micron MMF | 62.5 micron MMF | 50 micron MMF | 50 micron MMF | 50 micron MMF | SMF |
| --- | --- | --- | --- | --- | --- | --- |
| Modal bandwidth (MHz*km) | 160 * | 200 | 400 | 500 | 2000 ** | - |
| SR/SW 850 nm | 26 m | 33 m | 66 m | 82 m | 300 m | - |
| LR/LW 1310 nm | - | - | - | - | - | 10 km |
| ER/EW 1550 nm | - | - | - | - | - | 40 km |
| LX4 1310 nm | 300 m *** | 300 m *** | 240 m | 300 m | - | 10 km |
* Commonly referred to as "FDDI grade" fiber
** Sometimes referred to as "10 Gigabit Ethernet multimode fiber"; detailed in TIA-492AAAC
*** 62.5 micron multimode fiber has a modal bandwidth of 500 MHz*km at 1300 nm, as opposed to 160 or 200 MHz*km at 850 nm
When designing 10GBASE-E links greater than 30 km (i.e., the cable is not already installed) a cabling link-loss calculation, which is a simple arithmetic process, is used to make sure the combined loss of the cabling components in the link does not exceed the 11 dB channel insertion loss allocated for 10GBASE-E (Table 7). The cabling link-loss is calculated by adding the connector and splice loss to the cable loss. The cable attenuation for the link is calculated by multiplying the link distance by the loss per unit distance specified for the fiber (e.g., dB/km).
As shown in Table 10 (scenario 1), given a cable attenuation of 0.225 dB/km, the cable attenuation for a 40 km link is 9 dB (40 km x 0.225 dB/km = 9 dB). Assuming 2 dB for single-mode fiber connector and splice losses, the link-loss is 11 dB (9 dB + 2 dB = 11 dB), which is an allowable channel insertion loss for 10GBASE-E (Table 7) and ensures that this link can achieve 40 km. Similar calculations can be done for scenarios 2 and 3.
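The same arithmetic can be wrapped in a small helper and run over all three Table 10 scenarios to confirm each stays within the 11 dB allocation.

```python
def cabling_link_loss_db(distance_km: float, atten_db_per_km: float,
                         conn_splice_db: float = 2.0) -> float:
    """Cabling link-loss: cable attenuation (distance x dB/km) plus the
    connector and splice loss allocation (2 dB for single-mode fiber)."""
    return distance_km * atten_db_per_km + conn_splice_db

# Check the Table 10 scenarios against the 11 dB 10GBASE-E allocation.
# The tiny tolerance absorbs floating-point rounding at exactly 11 dB.
budget_db = 11.0
for distance_km, atten in ((40, 0.225), (35, 0.225), (30, 0.3)):
    loss = cabling_link_loss_db(distance_km, atten)
    verdict = "ok" if loss <= budget_db + 1e-9 else "over budget"
    print(f"{distance_km} km at {atten} dB/km: {round(loss, 2)} dB -> {verdict}")
```

Scenarios 1 and 3 land exactly on the 11 dB limit, while scenario 2 leaves margin; any additional splice or connector beyond the 2 dB allocation would push the 40 km scenario over budget.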
Table 10: 10GBASE-E link-loss calculation examples
| Parameter | Scenario 1 | Scenario 2 | Scenario 3 |
| --- | --- | --- | --- |
| Channel insertion loss * | 11 dB | 11 dB | 11 dB |
| Required attenuation loss | 0.225 dB/km | 0.225 dB/km | 0.3 dB/km ** |
| Connector and splice loss | 2 dB | 2 dB | 2 dB |
| Maximum distance | 40 km | 35 km | 30 km |
* The 10GBASE-E channel shall have attenuation between 5 and 11 dB. If required, an attenuator can be added to comply with this specification.
** This is the maximum fiber attenuation allowed for standard single-mode fiber at 1550 nm per IEC 60793-2. See Table 4 for details.
As with previous generations of Ethernet, 10 Gigabit Ethernet requires a network designer to thoroughly understand the capabilities of the fiber infrastructure. With 10GbE, new challenges and considerations have emerged, such as the effects of chromatic and polarization mode dispersion on signal integrity. In addition, decisions may have to be made regarding whether to use single-mode or multimode fiber. This paper has introduced some basic fiber related concepts and outlined some of the key points to understand and consider when designing a 10 Gigabit Ethernet network.
**Attenuation:** Reduction in transmitted optical power. Attenuation as a function of distance in optical fiber is logarithmic. Attenuation as a function of optical wavelength is dominated by the degree to which light is scattered by the molecular structure of the optical fiber ("Rayleigh scattering").
**Chromatic Dispersion:** A measure of the time-based broadening that occurs in pulses of light as they propagate along a length of fiber. The spectrum of the optical light pulsed from a transmitter into a fiber includes multiple wavelengths, not just a single wavelength. Chromatic dispersion is caused when different wavelengths of light within the pulse propagate at different velocities. The delay difference between the wavelengths transmitted and those received results in a broadening of the optical pulse. Chromatic dispersion impairs the recovery of the data signal. The wavelength at which the dispersion is minimized (approximately zero) is referred to as the zero-dispersion wavelength, characterized by the symbol λ0. Chromatic dispersion is the most distinguishing difference between the applicable ITU single-mode fiber types. Chromatic dispersion is typically expressed in ps/nm/km (picoseconds of pulse spreading, per nanometer of optical spectral width, per kilometer of fiber traveled).
At the same center wavelength, a broad-spectrum source, like a light emitting diode (LED), will produce much more chromatic dispersion than a narrow spectrum source, like a laser. Chromatic dispersion is the principal dispersion component in single-mode fiber systems, while modal dispersion dominates in laser-based multimode fiber systems.
**Cutoff Wavelength:** Above this wavelength, optical signals are single-mode; below it, signals are multimode. The cutoff wavelength for cabled fiber is lower than that for bare fiber due to mechanical stresses exerted on the fiber by the cabling process. For standard single-mode fiber, the standardized (IEC and ITU) cutoff wavelength for cabled fiber is below 1260 nm. With fiber designed for single-mode applications, transmission at wavelengths below the cutoff is rarely if ever attempted, as bandwidth and distance are significantly reduced, and better multimode performance at shorter wavelengths is achieved with fibers designed specifically for that purpose.
**Differential Mode Delay (DMD):** The difference in the time delay between modes is called differential mode delay. The timing of these modes is quantified by the differential mode delay measurement.
**Erbium Doped Fiber Amplifier (EDFA):** An amplifier that boosts the optical power of a signal without the need for electrical regeneration. An EDFA, in simple terms, is a length of optical fiber doped with erbium and "pumped" by a shorter wavelength laser. Information bearing signals transmitted through the doped fiber have additional energy imparted to them by the excited erbium, thus increasing their optical power. EDFAs are only effective in the longer wavelength regions (typically 1525-1625 nm).
**Four Wave Mixing (FWM):** The generation of light at a new wavelength due to the interaction of transmitted signals at two or more wavelengths. Efficient four wave mixing requires proper phase matching, where signals at adjacent wavelengths are essentially coincident in time.
**Intermodal Dispersion:** The time it takes for light to travel through a fiber is different for each mode, resulting in a spreading of the pulse at the output of the fiber referred to as intermodal dispersion or intermodal distortion. This mainly applies to multimode fiber.
**Modal Bandwidth:** A measure of the highest frequency signal that can be supported over a given distance of multimode fiber, as limited by modal dispersion. Modal bandwidth is typically expressed in MHz*km.
**Mode Field Diameter (MFD):** The MFD is used to describe the distribution of the optical power in a fiber by providing an "equivalent" diameter, sometimes referred to as the spot size.
**Nonlinear Effects:** Variations in the optical properties of an optical fiber as a function of optical power. For example, a high-powered optical pulse can induce changes in the chromatic dispersion of an optical fiber.
**Polarization Mode Dispersion (PMD):** The difference in propagation velocity between different optical polarization states. An optical signal can be represented by two orthogonally polarized components, each of which will travel at a different velocity due to inherent geometric flaws in a length of optical fiber. Since receivers used in optical communications do not discriminate between different polarization states, the two delayed polarization components are mixed at the receiving end. This mainly applies to single-mode fiber.