Lines Matching defs:cwnd
954 "Do we set the cwnd too (if always_lower is on)");
974 "How to set cwnd at exit, 0 - dynamic, 1 - use min-rtt, 2 - use cur gp-rtt, 3 - entry gp-rtt");
1041 "Blast out the full cwnd/rwnd when doing a PCM measurement");
1301 "Rack timely: when setting the cwnd, what is the min num segments");
1474 "Does a cwnd just-return end the measurement window (app limited)");
1610 "Should RACK try to use the shared cwnd on connections where allowed");
1615 "Should RACK place low end time limits on the shared cwnd feature");
1865 "Number of times that excessive rxt clamped the cwnd down");
1871 "Number of connections that have had the cwnd clamped down by excessive rxt");
2524 /* For the configured rate we look at our cwnd vs the ssthresh */
3798 * that we are being limited (cwnd/rwnd/app) and can't
4214 * play with the cwnd and wait until we get down
4347 * If configured to, set the cwnd and ssthresh to
4377 * cwnd space for timely to work.
4415 /* If we set in the cwnd also set the ssthresh point so we are in CA */
5530 /* Which way are we limited; if not cwnd limited, no advance in CA */
5591 * Only decrement the rc_out_at_rto if the cwnd advances
5612 * The cwnd has grown beyond ssthresh we have
5691 * Suck the next prr cnt back into cwnd, but
5696 * We are allowed to add back to the cwnd the amount we did
7792 * first retransmit; record ssthresh and cwnd so they can be
8940 * Note that being cwnd blocked is not applimited
9082 * cwnd state but we won't for now.
10639 * to be min(cwnd=1mss, 2mss). Which makes it basically
11145 * outstanding. If this occurs and we have room to send in our cwnd/rwnd
11565 * in the cwnd. We will only extend the fast path by
11909 * retransmit in the first place. Recover our original cwnd and
14844 * sure cwnd and ssthresh is correct.
14854 * cwnd and the rwnd (via dynamic rwnd growth). If
15665 * cwnd which may be inflated).
17260 * The cwnd is collapsed to
17270 * If we are in recovery our cwnd needs to be less for
17299 * the "reduced_win" is returned as a slimmed down cwnd that
17439 * Ok fill_bw holds our mythical b/w to fill the cwnd
17470 * We use the most optimistic possible cwnd/srtt for
17476 uint64_t cwnd, tr_perms = 0;
17489 cwnd = rack->r_ctl.rc_rack_largest_cwnd;
17491 cwnd = rack->r_ctl.cwnd_to_use;
17492 /* Inflate cwnd by 1000 so srtt of usecs is in ms */
17493 tr_perms = (cwnd * 1000) / srtt;
17503 * cwnd. Which in that case we are just waiting for
17656 * fill the cwnd to the max if it's not full.
18097 * We have in flight what we are allowed by cwnd (if
19076 * during pacing or to fill the cwnd and that was greater than
20322 /* We are doing cwnd sharing */
20342 /* First lets update and get the cwnd */
20604 * the cwnd is not so small that we could
20608 * of the socket buffer and the cwnd is blocking
20635 * room to send at least N pace_max_seg, the cwnd is greater
20645 * have cwnd space for a bit more than a max pace segment in flight.
20974 /* It's the cwnd */
20991 * measurements we don't want any gap (even cwnd).
20999 * the cwnd (or prr). We have been configured
22656 /* We treat this like a full retransmit timeout without the cwnd adjustment */
23280 /* RACK TLP cwnd reduction (bool) */
24486 * to 50 for 50% i.e. the cwnd is reduced to 50% of its previous value
24509 * to 80 for 80% i.e. the cwnd is reduced by 20% of its previous value when
24642 /* RACK TLP cwnd reduction (bool) */