
network speed vs bandwidth

Right, so from a class 10 years ago I remember that bandwidth is technically not a speed rating. It's more akin to how much water can fit through a hose than how fast the water is flowing. OK, that makes sense and I accepted it.

But the other day I thought of a question I should have had back then: what is the speed? Would it be the transmission speed of electrons through the medium? Or something else I haven't thought of?

if bandwidth is HOW MUCH, what is HOW FAST?
 
The propagation speed is basically the speed of light, give or take a little depending on the medium (signals move through copper and fiber at roughly two-thirds the speed they would in a vacuum).
 
Bandwidth is the peak or maximum bit rate; speed is the actual bit rate, which varies during transmission. They are both measured in bits per second, and bandwidth is really just the maximum speed. The confusion comes from signal processing, where bandwidth means a frequency range. This is a classic example of a term being reused in a different context.
 
if bandwidth is HOW MUCH, what is HOW FAST?

Bandwidth describes how much data can be transferred. Latency describes the time for packets to make a round trip or one-way trip (depends on the packet type and the test) from where you are to any given server.

So, bandwidth is how much, latency is how fast.

Bandwidth is measured in bits or bytes per second (with various SI prefixes); latency is measured in time (milliseconds, typically).
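
To put rough numbers on that split, here's a quick back-of-the-envelope sketch in Python (the 100 Mb/s and 30 ms figures are just example values I picked) showing that total transfer time is roughly latency plus size divided by bandwidth:

Code:
# Rough model: total time ~= latency + size / bandwidth.
# Ignores TCP slow start, protocol overhead, etc. -- just to show which term dominates.

def transfer_time(size_bytes, bandwidth_bps, latency_s):
    return latency_s + (size_bytes * 8) / bandwidth_bps

bandwidth = 100e6   # example: 100 Mb/s link
latency = 0.030     # example: 30 ms round trip

for size in (1_000, 1_000_000, 1_000_000_000):   # 1 KB, 1 MB, 1 GB
    print(f"{size:>13,} bytes -> {transfer_time(size, bandwidth, latency):.3f} s")

For the 1 KB transfer nearly all of the time is latency (the "how fast" part); for the 1 GB transfer nearly all of it is bandwidth (the "how much" part).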
 
latency is how fast.

This is true for transmission initiation; after that it's all about how much bandwidth you can actually use (is the transfer speed limited by the server's available bandwidth for one user at that time, or by the user's bandwidth?)

Edit: bah, disregard this, it's not that simple - every TCP packet has its own latency
 
Bandwidth describes how much data can be transferred. Latency describes the time for packets to make a round trip or one-way trip (depends on the packet type and the test) from where you are to any given server.

So, bandwidth is how much, latency is how fast.

Bandwidth is measured in bits or bytes per second (with various SI prefixes); latency is measured in time (milliseconds, typically).

This. Your best parallel would be satellite internet service. It can have decent bandwidth of up to 3.0 MB/s but terrible speed. It takes roughly a quarter of a second for a signal to travel from the ground up to a geostationary satellite and back down, and once you add in the time it takes for the signal to reach the transmitter, be converted, received, decoded, etc., you end up with a latency averaging around 850 ms. Imagine trying to play an FPS with a latency of 850 ms.

While you could have the bandwidth needed, the speed will hinder you. People get this confused because when you download something, what you're really watching is bandwidth over time. Latency doesn't play much of a factor with large files, since the data stream is continuous; it's with small bursts of data that speed comes into play.
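
Same idea with the numbers above plugged in: a rough Python sketch (treating the 3.0 MB/s and ~850 ms figures as the link's characteristics) of how effective throughput collapses for small bursts on a high-latency link:

Code:
# Effective throughput = size / (latency + size / bandwidth).
# Inputs are just the example figures from this post: 3.0 MB/s bandwidth, 850 ms latency.

bandwidth = 3.0e6   # bytes per second
latency = 0.850     # seconds

for size in (10_000, 100_000, 100_000_000):   # 10 KB burst, 100 KB, 100 MB file
    total = latency + size / bandwidth
    print(f"{size:>11,} B: {total:6.2f} s total, {size / total / 1e6:.2f} MB/s effective")

A big file still gets close to the full 3 MB/s, but a small burst effectively sees a tiny fraction of it, which is why twitchy games are unplayable even though the "bandwidth" looks fine.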
 
I think this can get confusing because the original definition was-
"(1) A range within a band of frequencies or wavelengths.", which made it technically incorrect for digital throughput.

Then they added the definition-
"(2) The amount of data that can be transmitted in a fixed amount of time. For digital devices, the bandwidth is usually expressed in bits per second(bps) or bytes per second. For analog devices, the bandwidth is expressed in cycles per second, or Hertz (Hz). "

So you could say that gigabit Ethernet has 1Gb/s of bandwidth or 1Gb/s of throughput and still be correct. However, there are still some older guys out there who don't totally agree with the added definition and may even tell you you're wrong.
 
This is true for transmission initiation; after that it's all about how much bandwidth you can actually use (is the transfer speed limited by the server's available bandwidth for one user at that time, or by the user's bandwidth?)

Yeah, I'm assuming you're not saturating your network prior to testing it, but even so, the definition still stands. If you're saturating your network and you do a ping, your latency will properly reflect the added delay from the bandwidth in use. Your modem can only send so many packets at once, and a ping will properly describe the response time for any packet at any particular instant.

Regardless of the load though, latency still describes the response time. The fact that a loaded modem responds less quickly doesn't change that. Latency is still the measurement of the response time of a packet-switched network. How much bandwidth you use might affect it, but it doesn't define it, considering that packet shaping will introduce latency once you've reached your bandwidth cap, and most ISPs will shape your traffic.

This is all splitting hairs though.

The real thing to take away is that latency is the measurement of "how fast".


This. Your best parallel would be satellite internet service. It can have decent bandwidth of up to 3.0 MB/s but terrible speed. It takes roughly a quarter of a second for a signal to travel from the ground up to a geostationary satellite and back down, and once you add in the time it takes for the signal to reach the transmitter, be converted, received, decoded, etc., you end up with a latency averaging around 850 ms. Imagine trying to play an FPS with a latency of 850 ms.

In some places this has gotten better. I've seen people with satellite internet getting response times closer to 350-400 ms. I think this is improving, but it really depends on how it's set up. The "satellite" might actually be a station on top of a mountain instead of a real satellite, which cuts down on the distance the signal has to travel.
 
But then latency is a measure of how long it took, which still isn't exactly the speed, is it? It's much closer than bandwidth, but speed should read as "XYZ m (or km) per second", no?
 
Bandwidth = Amount of Data/Time
Speed = Distance/Time

Latency is not a measure of speed, it is a measure of time. So to figure out the speed you'd have to figure out the distance the ping request has to travel.

So let's say you have a 100 ft length of cable connecting your computer to the router, and you ping the router with a result of 0.01 ms.

The speed would be 200 ft / 0.01 ms, or 20,000 ft/ms. You use 200 ft because pings are round-trip times, so the distance traveled is actually 200 ft.

Obviously I just made these numbers up, but I'm just saying that is how you would calculate speed.
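
As a quick sanity check on that arithmetic, here's a minimal Python sketch using the same made-up 100 ft / 0.01 ms numbers:

Code:
# Speed = round-trip distance / round-trip time.
# Made-up numbers from the example above: 100 ft of cable, 0.01 ms ping.

cable_length_ft = 100.0
ping_ms = 0.01

round_trip_ft = 2 * cable_length_ft        # a ping measures out and back
speed_ft_per_ms = round_trip_ft / ping_ms  # 20,000 ft/ms
print(f"{speed_ft_per_ms:,.0f} ft/ms ({speed_ft_per_ms * 1000:,.0f} ft/s)")

For reference, light in a vacuum is roughly 983,571,056 ft/s, so most of that 0.01 ms would actually be NIC and router processing time rather than travel time on the wire.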
 
that makes sense, thanks newt :)
 
So ping test a local server and do the math based on the distance it provides. I am curious about what you would get.
 
The problem is that this assumes a straight run to the server, and cabling doesn't work like that. It would give you a rough guess, but it would still be way off. But we'll try it anyway, for science!

(Attachment: screenshot of the ping results, averaging ~9 ms to a server roughly 50 miles away)


So ~100 miles / 9 ms, which calculates out to ~11,111.11 miles/s, or ~39,999,996 mph, if my math is correct.
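
Quick check of that math (miles and milliseconds straight from the ping above, with the speed of light thrown in for comparison):

Code:
# ~100 miles round trip in ~9 ms, compared against the speed of light in a vacuum.

distance_miles = 100.0
time_ms = 9.0

miles_per_s = distance_miles / (time_ms / 1000)   # ~11,111 mi/s
c_miles_per_s = 186_282                           # speed of light

print(f"{miles_per_s:,.2f} mi/s ({miles_per_s * 3600:,.0f} mph)")
print(f"about {miles_per_s / c_miles_per_s:.1%} of the speed of light")

So the math checks out (the 39,999,996 figure just comes from rounding to 11,111.11 mi/s before multiplying), and the ~6% of c result hints that most of those 9 ms are spent in switches and routers rather than actually traveling down the cable.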
 
Speed is a misnomer in computing because it's really the rate at which electrons/photons travel and are processed. Bandwidth is given in bits or bytes per second (or, for analog signals, in Hz). Even if the bandwidth exists, that doesn't necessarily mean it gets used, thanks to the fragmentary nature of network packets.

To say a certain network is "fast," implying a speed, is to mean a network has "high bandwidth." To say a certain network is "slow" is to mean a network has "low bandwidth." In both cases, this is always relative to the network load. For a single packet, a 56K network could be as "fast" as a gigabit network because the packets arrive at about the same time. On the other hand, if you're sending a billion packets, 56K will quickly be called "slow" because it doesn't have near the bandwidth of the gigabit network.


So ~100Miles/9ms, so that calculates out to ~11,111.11 Miles/s or ~39,999,996 Miles/h. If my math is correct.
That's before accounting for all the switches and hubs it hit on the way, and for the fact that communications rarely travel in a straight line (I figure that's why you did 100 instead of 50). Basically that tells you the average speed over the path the packets took. Needless to say, it doesn't take long for a packet to circumnavigate the Earth, because it spends most of its trip on ridiculously high-bandwidth fiber optic cables. Sending a message to Pluto, on the other hand, would take hours; even a bounce off a geostationary satellite adds a noticeable fraction of a second.
 
That's before accounting for all the switches and hubs it hit on the way, and for the fact that communications rarely travel in a straight line (I figure that's why you did 100 instead of 50). Basically that tells you the average speed over the path the packets took. Needless to say, it doesn't take long for a packet to circumnavigate the Earth, because it spends most of its trip on ridiculously high-bandwidth fiber optic cables. Sending a message to Pluto, on the other hand, would take hours; even a bounce off a geostationary satellite adds a noticeable fraction of a second.

Actually, I did 100 instead of 50 because pings are a round-trip measurement. For the calculation I ignored the fact that the path traveled isn't straight. Given the way cabling is run, the actual distance traveled by each packet could easily have doubled, for sure.
 
Speed is a misnomer in computing because it's really the rate at which electrons/photons travel and are processed. Bandwidth is given in bits or bytes per second (or, for analog signals, in Hz). Even if the bandwidth exists, that doesn't necessarily mean it gets used, thanks to the fragmentary nature of network packets.

To say a certain network is "fast," implying a speed, is to mean a network has "high bandwidth." To say a certain network is "slow" is to mean a network has "low bandwidth." In both cases, this is always relative to the network load. For a single packet, a 56K network could be as "fast" as a gigabit network because the packets arrive at about the same time. On the other hand, if you're sending a billion packets, 56K will quickly be called "slow" because it doesn't have near the bandwidth of the gigabit network.



That's before accounting for all the switches and hubs it hit on the way, and for the fact that communications rarely travel in a straight line (I figure that's why you did 100 instead of 50). Basically that tells you the average speed over the path the packets took. Needless to say, it doesn't take long for a packet to circumnavigate the Earth, because it spends most of its trip on ridiculously high-bandwidth fiber optic cables. Sending a message to Pluto, on the other hand, would take hours; even a bounce off a geostationary satellite adds a noticeable fraction of a second.

To some extent that is true, and it certainly is with all modern forms of physical communication. 56k was slow to respond even with one packet, but that was just the nature of the phone system. So a ping on 56k with a single packet could still take 250-300 ms, where on cable, DSL, or fiber it could be closer to 80-100 ms or lower, depending on the location of the server and the quality of the broadband or 56k connection.
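
Part of that is simply how long it takes to clock one full-size packet onto a 56k line in the first place. A rough sketch (Python, assuming a standard 1500-byte Ethernet-sized packet) of the serialization delay alone:

Code:
# Serialization delay = packet size / link rate.
# Ignores propagation, modulation, and interleaving delays; assumes a 1500-byte packet.

packet_bits = 1500 * 8

for name, rate_bps in (("56k modem", 56_000), ("10 Mb/s", 10e6), ("1 Gb/s", 1e9)):
    print(f"{name:>9}: {packet_bits / rate_bps * 1000:8.3f} ms just to put the packet on the line")

That's over 200 ms for one full-size packet. Small pings were quicker to serialize, but they still suffered from the modem's own processing and buffering, hence the 250-300 ms figures above.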
 
To some extent that is true, and it certainly is with all modern forms of physical communication. 56k was slow to respond even with one packet, but that was just the nature of the phone system. So a ping on 56k with a single packet could still take 250-300 ms, where on cable, DSL, or fiber it could be closer to 80-100 ms or lower, depending on the location of the server and the quality of the broadband or 56k connection.

ADSL ran on POTS just like dial-up did. VDSL is much, much faster, yet it runs on the same lines dial tone does.
 
To some extent that is true, and it certainly is with all modern forms of physical communication. 56k was slow to respond even with one packet, but that was just the nature of the phone system. So a ping on 56k with a single packet could still take 250-300 ms, where on cable, DSL, or fiber it could be closer to 80-100 ms or lower, depending on the location of the server and the quality of the broadband or 56k connection.
Correct me if I'm wrong, but 56K is analog in the voice band (roughly 300 Hz to 3.4 kHz). Its response time was slow because of that. DSL runs in the hundreds of kHz and up, and is a digital signal versus analog.

56K was a bad example.

Actually, I did 100 instead of 50 because pings are a round-trip measurement. For the calculation I ignored the fact that the path traveled isn't straight. Given the way cabling is run, the actual distance traveled by each packet could easily have doubled, for sure.
Also true. Stupid me.
 
Speed is either adequate or inadequate. Not trying to be funny. If the latency is high but the data rate is huge, it may or may not matter, depending on what you're doing.
 
Correct me if I'm wrong, but 56K is analog in the voice band (roughly 300 Hz to 3.4 kHz).

QAM and its variants and lattice compression.

All DSL and cable have really done is increase how much data each signal carries, through better and more complex modulation and coding than we could cost-effectively do years ago: moving up to 64-, 128-, and 256-QAM, and using lattice (trellis) coding to error-check the data with minimal processing overhead, which keeps latency down.

Compression of the payload matters too: some media types are slower to transfer because they either cannot be compressed further, or the compression applied to them results in unacceptable artifacts.

Plus, the advances in termination, load calibration, and much else have been huge for the cleanliness of the signal, which costs fewer parity-check bits and allows larger QAM "words".
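
For what it's worth, the "larger QAM words" part boils down to bits per symbol, which is just log2 of the constellation size. A tiny Python illustration:

Code:
# Bits carried per symbol for a given QAM constellation size: log2(points).
import math

for points in (16, 64, 128, 256):
    print(f"{points:>3}-QAM: {int(math.log2(points))} bits per symbol")

A cleaner line (better termination, calibration, fewer parity bits needed) is what lets the equipment move up to the denser constellations.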


OP


You are looking for two separate things. First is your peak theoretical bandwidth: I purchase a slice of the pie that can reach a theoretical peak of, let's say, 100 MB/s. However, every residential connection in the US is oversold. ISPs know users will rarely use all the bandwidth they buy all the time, so if we follow a 70% bend-of-the-knee rule we can oversell a 1Gb connection by 30% with no real noticeable degradation of service; most seem to follow a 50% or lower bend-of-the-knee rule, though.

So let's say you are in a perfect world and your connection is not being throttled or oversold by your ISP. We then find the next weakest link in the chain, usually the router you use, or the modem/router combo. Since these are built to provide good service as cheaply as possible, the amount of processing the modem can do is generally limited, and once a connection limit (not a physical one, but port/IP/service connection aggregation) is reached, delay may be added while each packet waits for the CPU to route it to the originating client/server.

The hardware limitation of bandwidth in routers is known as backplane bandwidth.

For example, an 8-port gigabit switch needs 16Gb/s of backplane bandwidth (1Gb/s in and out on every port) to service all 8 ports at full speed, but depending on MTU that may require more buffering and processing than is cost effective in a $29 piece of hardware, so you may only get 2Gb/s or 1Gb/s, meaning that when two other computers are transferring data at a high rate your internet may slow down in both latency and throughput.
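
Rough numbers for that, sketched in Python (full-duplex gigabit assumed, which is how switch fabric capacity is usually quoted):

Code:
# Non-blocking fabric for an N-port switch: every port sending and receiving at line rate at once.

ports = 8
line_rate_gbps = 1.0

fabric_gbps = ports * line_rate_gbps * 2   # x2 because gigabit Ethernet is full duplex
print(f"{ports}-port gigabit switch: {fabric_gbps:.0f} Gb/s of fabric to be fully non-blocking")

A cheap unit with only 1-2 Gb/s of real switching capacity starts queueing packets (adding latency) as soon as a couple of ports get busy.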

I reviewed a set of power-line Ethernet adapters and they added 3ms to my latency, and were only capable of 30Mbps despite having a higher rating.

If you want to know your absolute "ping" or turnaround times, start with your internal network, then move out to the local ISP subnet, then usually to their primary node, and then to a web server. Average a few runs at each hop, and subtract the numbers from your network out to the ISP's last node to determine where a problem is, if there is one.


So, for example:


ping 192.168.0.1 (local router interface) 1ms
ping 111.10.10.10 (your modem WAN IP) 3ms
ping 111.0.0.1 (local subnet gateway for ISP) 7ms
ping 123.456.789.12 (ISP primary gateway) 13ms
ping www.google.com 46ms


Subtract each of the first four numbers from the next to see what each hop adds, and do this test while your network is in use, both during the ISP's peak hours and off-peak.
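
If you want to script that, here is a minimal sketch (Python, calling the system ping; the hop addresses are placeholders you'd swap for your own router, modem, ISP gateways, and a web host, and on Windows the count flag is -n rather than -c) that pings each hop and shows how much latency each one adds:

Code:
import re
import subprocess

# Placeholder hops -- replace with your router, modem WAN IP, ISP gateways, and a web server.
HOPS = ["192.168.0.1", "111.10.10.10", "111.0.0.1", "8.8.8.8", "www.google.com"]

def avg_ping_ms(host, count=4):
    """Run the system ping (Linux/macOS flags) and average the reported times."""
    out = subprocess.run(["ping", "-c", str(count), host],
                         capture_output=True, text=True).stdout
    times = [float(t) for t in re.findall(r"time[=<]([\d.]+)", out)]
    return sum(times) / len(times) if times else None

previous = 0.0
for hop in HOPS:
    avg = avg_ping_ms(hop)
    if avg is None:
        print(f"{hop}: no reply")
        continue
    print(f"{hop}: {avg:.1f} ms (+{avg - previous:.1f} ms over the previous hop)")
    previous = avg

Run it while the network is idle and again while it's loaded, during and outside the ISP's peak hours, and the hop where the added latency jumps is where to start looking.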
 