TF2 Network Graph

The Source engine used in the Orange Box games has an enhanced version of the net graph.

Net Graph ConVar control

net_graph
Base command to configure how net_graph is displayed
  • 0 = No graph (default)
  • 1 = Draw basic netgraph (text only) [areas 3, 4, 6 and 7]
  • 2 = Draw data on payload as well as latency and interpolation graphs [areas 8 and 9]
  • 3 = Draw payload legend [area 1] and packet loss percentage and choked packet percentage [area 4]
  • 4 = Draw the server perf statistics area [area 5]
net_graphheight
Height of the latency/text area of the net graph
net_graphmsecs
The latency/text area of the net graph represents this many milliseconds
net_graphpos
Where to position the graph. It is always at the bottom of the screen.
  • 0 = left edge
  • 1 = right edge
  • 2 = centered
  • 3 or higher specifies the X co-ordinate of the graph's left edge
net_graphproportionalfont
Determines whether the netgraph font is proportional or not (i.e., whether it tries to use larger fonts when running at higher resolutions)
net_graphshowinterp
Draw the interpolation graph portion [area 9]
net_graphshowlatency
Draw the ping/packet loss graph portion [area 8]
net_graphsolid
Draws height ticks as full vertical rectangles, rather than just single ticks [was more important for Half-Life 1/Counter-Strike 1.6 engine since it had a software renderer]
net_graphtext
Draw text fields
net_scale
Varies the scale of the payload portion of the net_graph (units are bytes per pixel). Each green hash mark at the left edge of the payload area represents 50 bytes [see marker "b"], each faint gray tick represents 10 bytes (if scale is sufficiently low)

Areas of the Net Graph Display

Area 1

This area is the legend for the colors used in the payload section of the graph. If a part of the payload arrives but doesn't fit into one of the predetermined buckets, then it is represented in the clear area between the last color and the little white dot that represents the full packet size [see indicator "a" in the image].

Area 2

For packets greater than 300 bytes which are in the 95th percentile, the size of the packet is rendered in text at the top of the payload area [see marker 2]. Note that the Orange Box tech uses compression on the packets, but the sizes reported in the net_graph payload are based on the decompressed payload size.

Area 3

The local connection's frames per second and round trip ping to the server are shown in area 3.

Area 4

This area shows the current bandwidth usage. The in/out fields show the sizes in bytes of the last incoming and outgoing packets. The k/s fields show the kilobytes per second (a rolling average) recently seen in each direction.

Area 5

This area shows the performance of the server the client is connected to. The "sv" tag shows the fps of the server as of the latest networking update delivered to the client. The "var" shows the standard deviation of the server's frametime (where server fps = 1.0 / frametime) over the last 50 frames recorded by the server. If the server's framerate is below 20 fps, then this line will draw in yellow. If the server's framerate is below 10 fps, then this line will draw in red.
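
A minimal sketch of the statistics this area reports, assuming "var" is the standard deviation of the last 50 server frametimes and using the color thresholds given above; ServerPerfColor and FrameTimeStdDev are hypothetical names, not engine functions:

#include <cmath>

// Hypothetical sketch, not engine source: pick the line color from the
// server's framerate using the thresholds described above.
const char *ServerPerfColor( float serverFps )
{
    if ( serverFps < 10.0f ) return "red";
    if ( serverFps < 20.0f ) return "yellow";
    return "white";
}

// "var": standard deviation of the server's frametime over the last
// 50 recorded frames (server fps = 1.0 / frametime).
float FrameTimeStdDev( const float frametimes[], int count )
{
    float mean = 0.0f, var = 0.0f;
    for ( int i = 0; i < count; ++i ) mean += frametimes[i];
    mean /= count;
    for ( int i = 0; i < count; ++i )
        var += ( frametimes[i] - mean ) * ( frametimes[i] - mean );
    return std::sqrt( var / count );
}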

Area 6

The "lerp" indicator shows the number of msecs of interpolation being used by the client. Some notes on the value of lerp follow below.

Area 7

This area shows the user's current cl_updaterate setting, the number of updates per second actually received from the server, the number of packets per second actually sent to the server, and the user's cl_cmdrate setting (the user's desired number of packets per second to send to the server).

Area 8

When net_graphshowlatency is 1, this area shows a historical view of the latency of the connection. The height (indicated by marker "d") corresponds to net_graphmsecs time (actually there is a bit of headroom after net_graphmsecs at the top for the text fields to fit into). Red vertical lines indicate dropped packets from the server down to the client. If the graph shows a yellow marker (such as at marker "c"), this indicates that the server had to choke back one or more packets before sending the client an update.

Area 9

When net_graphshowinterp is 1, this area shows, for each client frame, how much interpolation was needed. If there is a large gap between packets (packet loss, server framerate too low, etc.), then the client will have insufficient data for interpolation and will start to extrapolate. The extrapolation is shown as orange bars rising above the white line (a run of extrapolation can be seen just to the left of the 9 marker). In addition, the very bottom pixel indicates whether a CUserCmd ("usercmd") packet was sent on that rendering frame, or was held back by the client and aggregated due to the user's cl_cmdrate setting.

How the various networking ConVars work

The Source engine, by default, runs internal simulation at a tick interval of 15 msecs (66.67 ticks per second). Each client "usercmd" is a 15 msec timeslice (which makes it possible for prediction to work, since the inputs on both the client and server should result in the same outputs in most cases). The cl_cmdrate ConVar determines how many physical packets per second the client will try to send to the server. Note that this is decoupled from the tick interval. If the cl_cmdrate setting is low, or the client's actual framerate is low, then it's possible that a single physical packet from the client to the server will contain multiple actual "usercmd" payloads. Conversely, if cl_cmdrate is higher than 66.67, it's possible that some physical packets will be sent to the server without any actual "usercmds" in them. Furthermore, if the client sets a low "cl_rate" setting, then fewer physical packets may be sent to the server. The frequency of cl_cmdrate updates generally doesn't impact a player's ability to hit opponents, since lag compensation factors in the player's latency to the server and interpolation amount when checking whether shots would have hit opponents.
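
As a rough arithmetic sketch of the decoupling described above, assuming the default 15 msec tick interval (all names here are illustrative, not engine identifiers):

#include <cstdio>

int main()
{
    const double tickInterval = 0.015; // one "usercmd" timeslice per ~15 msec tick
    const double cl_cmdrate   = 30.0;  // desired physical packets per second

    // At cl_cmdrate 30, each physical packet spans ~33 msecs of input and
    // carries roughly two 15 msec usercmds; above 66.67, some packets
    // would carry none.
    printf( "usercmds per packet ~ %.1f\n", ( 1.0 / cl_cmdrate ) / tickInterval );
    return 0;
}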

From the server side, the client's cl_updaterate setting determines how often the server will attempt to send a physical packet to the client. The basic formula is:

next packet time = current time + max( 1.0/cl_updaterate, bytes sent/rate setting )
Note.pngNote:"bytes sent" includes the UDP packet header overhead of 28 bytes.

In other words, if the player is requesting an updaterate of 20 packets per second, then the minimum time interval between physical packets is 50 milliseconds. However, if the player has a rate setting of 10000, and the server just sent the player a 1000 byte packet, then the minimum interval will be 1000/10000 = 0.1 seconds, or 100 milliseconds, instead. If 1.0/cl_updaterate has elapsed and the server checks the "rate" part of the above equation and finds that it cannot yet send a packet, then the "choke" counter is incremented for the player. All this means is that the combination of rate, cl_updaterate, and physical packet size has forced the server to defer sending the player another packet. Thus, artificially setting cl_updaterate to a high number will usually increase "choke", especially in a busy scene, since the server is always trying to obey the user's specified rate setting. Choke is not necessarily a negative indicator; it could just mean that the cl_updaterate setting is too high.
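
A minimal sketch of this scheduling rule (NextPacketTime is a hypothetical helper, not the engine's actual function):

#include <algorithm>

// Hypothetical sketch of the formula above. bytesSent includes the
// 28-byte UDP packet header overhead.
double NextPacketTime( double currentTime, double cl_updaterate,
                       int bytesSent, int rate )
{
    double updateInterval = 1.0 / cl_updaterate;      // e.g. 1/20 = 50 msecs
    double rateInterval   = (double)bytesSent / rate; // e.g. 1000/10000 = 100 msecs

    // When rateInterval is the larger term, the packet is deferred and
    // the player's "choke" counter is incremented.
    return currentTime + std::max( updateInterval, rateInterval );
}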

The cl_updaterate, cl_interp_ratio, and cl_interp ConVars control interpolation (and lag compensation) in the following relationship. By default, Source games are tuned for an updaterate of 20 packets per second, which leads to an expected delta between packets of 50 msecs. Because packet loss can occur, the interpolator was tuned to allow for a single dropped packet without any hitch in the smoothness of motion perceived by the client. Thus, 100 milliseconds was chosen as the default for cl_interp (0.1 s = 2 x (1.0 / 20), where 20 is the default cl_updaterate). cl_interp_ratio defines the lower bound on the actual interpolation amount used on the client. Specifically, the interpolation amount is:

min( max( cl_interp, cl_interp_ratio / cl_updaterate ), 0.25f )
Note: Server operators can clamp the allowable cl_interp_ratio and cl_updaterate settings, and the clamped values are factored into the above calculations.
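
A minimal sketch of this calculation (InterpolationAmount is a hypothetical helper name):

#include <algorithm>

float InterpolationAmount( float cl_interp, float cl_interp_ratio,
                           float cl_updaterate )
{
    // Defaults: cl_interp 0.1, cl_interp_ratio 2, cl_updaterate 20
    // -> min( max( 0.1, 2/20 ), 0.25 ) = 0.1 s, i.e. 100 msecs of lerp.
    return std::min( std::max( cl_interp, cl_interp_ratio / cl_updaterate ), 0.25f );
}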

The Source netgraph now includes a "lerp" indicator which shows the actual interpolation amount (usually 100 msec unless the server or user is forcing non-default settings). The indicator will turn yellow if the server's framerate (the actual framerate on the remote machine) drops below this interval. This can be used to figure out why all of the objects in the world are no longer moving smoothly. In addition, the indicator will turn orange if the user or server has tuned the ConVars such that the interpolation amount is less than 2 / updaterate. This indicates that if there is any packet loss (or possibly choke, if the choke occurs for long periods due to large packets being sent over a low-bandwidth rate setting), the player will likely see sluggishness in the game.
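
A minimal sketch of these color rules (the names and enum are illustrative, not engine source):

enum LerpColor { LERP_WHITE, LERP_YELLOW, LERP_ORANGE };

LerpColor LerpIndicatorColor( float lerpSeconds, float serverFrameTime,
                              float updateRate )
{
    if ( serverFrameTime > lerpSeconds )   // server framerate below 1/lerp
        return LERP_YELLOW;
    if ( lerpSeconds < 2.0f / updateRate ) // less than two updates of buffer
        return LERP_ORANGE;
    return LERP_WHITE;
}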