<!doctype html public "-//W3C//DTD HTML 4.01 Transitional//EN"
        "http://www.w3.org/TR/html4/loose.dtd">

<html>

<head>

<title>Postfix Bottleneck Analysis</title>

<meta http-equiv="Content-Type" content="text/html; charset=us-ascii">

</head>

<body>

<h1><img src="postfix-logo.jpg" width="203" height="98" ALT="">Postfix Bottleneck Analysis</h1>

<hr>

<h2>Purpose of this document</h2>

<p> This document is an introduction to Postfix queue congestion analysis.
It explains how the qshape(1) program can help to track down the
reason for queue congestion. qshape(1) is bundled with Postfix
2.1 and later source code, under the "auxiliary" directory. This
document describes qshape(1) as bundled with Postfix 2.4. </p>

<p> This document covers the following topics: </p>

<ul>

<li><a href="#qshape">Introducing the qshape tool</a>

<li><a href="#trouble_shooting">Troubleshooting with qshape</a>

<li><a href="#healthy">Example 1: Healthy queue</a>

<li><a href="#dictionary_bounce">Example 2: Deferred queue full of
dictionary attack bounces</a></li>

<li><a href="#active_congestion">Example 3: Congestion in the active
queue</a></li>

<li><a href="#backlog">Example 4: High volume destination backlog</a>

<li><a href="#queues">Postfix queue directories</a>

<ul>

<li> <a href="#maildrop_queue"> The "maildrop" queue </a>

<li> <a href="#hold_queue"> The "hold" queue </a>

<li> <a href="#incoming_queue"> The "incoming" queue </a>

<li> <a href="#active_queue"> The "active" queue </a>

<li> <a href="#deferred_queue"> The "deferred" queue </a>

</ul>

<li><a href="#credits">Credits</a>

</ul>

<h2><a name="qshape">Introducing the qshape tool</a></h2>

<p> When mail is draining slowly or the queue is unexpectedly large,
run qshape(1) as the super-user (root) to help zero in on the problem.
The qshape(1) program displays a tabular view of the Postfix queue
contents. </p>

<ul>

<li> <p> On the horizontal axis, it displays the queue age with
fine granularity for recent messages and (geometrically) less fine
granularity for older messages. </p>

<li> <p> The vertical axis displays the destination (or, with the
"-s" switch, the sender) domain. Domains with the most messages are
listed first. </p>

</ul>

<p> For example, in the output below we see the top 10 lines of
the (mostly forged) sender domain distribution for captured spam
in the "hold" queue: </p>

<blockquote>
<pre>
$ qshape -s hold | head
                         T  5 10 20 40 80 160 320 640 1280 1280+
                 TOTAL 486  0  0  1  0  0   2   4  20   40   419
             yahoo.com  14  0  0  1  0  0   0   0   1    0    12
  extremepricecuts.net  13  0  0  0  0  0   0   0   2    0    11
        ms35.hinet.net  12  0  0  0  0  0   0   0   0    1    11
      winnersdaily.net  12  0  0  0  0  0   0   0   2    0    10
           hotmail.com  11  0  0  0  0  0   0   0   0    1    10
           worldnet.fr   6  0  0  0  0  0   0   0   0    0     6
        ms41.hinet.net   6  0  0  0  0  0   0   0   0    0     6
                osn.de   5  0  0  0  0  0   1   0   0    0     4
</pre>
</blockquote>

<ul>

<li> <p> The "T" column shows the total (in this case sender) count
for each domain. The columns with numbers above them show counts
for messages aged fewer than that many minutes, but not younger
than the age limit for the previous column. The row labeled "TOTAL"
shows the total count for all domains. </p>

<li> <p> In this example, there are 14 messages allegedly from
yahoo.com, 1 between 10 and 20 minutes old, 1 between 320 and 640
minutes old, and 12 older than 1280 minutes (there are 1440 minutes
in a day). </p>

</ul>
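<p> Since qshape(1) output is plain text, ordinary text tools can
narrow a large listing down to the totals and a single domain of
interest. A small sketch, using yahoo.com from the listing above: </p>

<blockquote>
<pre>
$ qshape -s hold | head -2              <i>(column headings and totals)</i>
$ qshape -s hold | egrep 'yahoo\.com'   <i>(one domain of interest)</i>
</pre>
</blockquote>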
<p> When the output is a terminal, intermediate results showing the top 20
domains (-n option) are displayed after every 1000 messages (-N option),
and the final output also shows only the top 20 domains. This makes
qshape useful even when the deferred queue is very large and it would
otherwise take prohibitively long to read the entire deferred queue. </p>

<p> By default, qshape shows statistics for the union of the
incoming and active queues, which are the most relevant queues to
look at when analyzing performance. </p>

<p> One can request an alternate list of queues: </p>

<blockquote>
<pre>
$ qshape deferred
$ qshape incoming active deferred
</pre>
</blockquote>

<p> These will show the age distribution of the deferred queue, or
of the union of the incoming, active and deferred queues. </p>

<p> Command line options control the number of display "buckets",
the age limit for the smallest bucket, display of parent domain
counts, and so on. The "-h" option outputs a summary of the available
switches. </p>
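<p> As a hypothetical illustration, assuming the "-b" (bucket count),
"-t" (age limit of the smallest bucket) and "-p" (parent domain counts)
options described in the qshape(1) manual page, a finer-grained view
of the deferred queue might be requested as follows (check "qshape -h"
on your version first): </p>

<blockquote>
<pre>
$ qshape -b 10 -t 1 -p deferred
</pre>
</blockquote>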
<h2><a name="trouble_shooting">Troubleshooting with qshape</a>
</h2>

<p> Large numbers in the qshape output represent a large number of
messages that are destined to (or alleged to come from) a particular
domain. It should be possible to tell at a glance which domains
dominate the queue sender or recipient counts, approximately when
a burst of mail started, and when it stopped. </p>

<p> The problem destinations or sender domains appear near the top
left corner of the output table. Remember that the active queue
can accommodate up to 20000 ($qmgr_message_active_limit) messages.
To check whether this limit has been reached, use: </p>

<blockquote>
<pre>
$ qshape -s active <i>(show sender statistics)</i>
</pre>
</blockquote>

<p> If the total sender count is below 20000 the active queue is
not yet saturated; any high volume sender domains will show near the
top of the output. </p>

<p> With oqmgr(8) the active queue is also limited to at most 20000
recipient addresses ($qmgr_message_recipient_limit). To check for
exhaustion of this limit use: </p>

<blockquote>
<pre>
$ qshape active <i>(show recipient statistics)</i>
</pre>
</blockquote>

<p> Having found the high volume domains, it is often useful to
search the logs for recent messages pertaining to the domains in
question. </p>

<blockquote>
<pre>
# Find deliveries to example.com
#
$ tail -10000 /var/log/maillog |
    egrep -i ': to=&lt;.*@example\.com&gt;,' |
    less

# Find messages from example.com
#
$ tail -10000 /var/log/maillog |
    egrep -i ': from=&lt;.*@example\.com&gt;,' |
    less
</pre>
</blockquote>

<p> You may want to drill down on some specific queue IDs: </p>

<blockquote>
<pre>
# Find all messages for a specific queue id.
#
$ tail -10000 /var/log/maillog | egrep ': 2B2173FF68: '
</pre>
</blockquote>

<p> Also look for queue manager warning messages in the log. These
warnings can suggest strategies to reduce congestion. </p>

<blockquote>
<pre>
$ egrep 'qmgr.*(panic|fatal|error|warning):' /var/log/maillog
</pre>
</blockquote>

<p> When all else fails, try the Postfix mailing list for help, but
please don't forget to include the top 10 or 20 lines of qshape(1)
output. </p>

<h2><a name="healthy">Example 1: Healthy queue</a></h2>

<p> When looking at just the incoming and active queues, under
normal conditions (no congestion) the incoming and active queues
are nearly empty. Mail leaves the system almost as quickly as it
comes in or is deferred without congestion in the active queue.
</p>

<blockquote>
<pre>
$ qshape <i>(show incoming and active queue status)</i>

                 T  5 10 20 40 80 160 320 640 1280 1280+
         TOTAL   5  0  0  0  1  0   0   0   1    1     2
 meri.uwasa.fi   5  0  0  0  1  0   0   0   1    1     2
</pre>
</blockquote>

<p> If one looks at the two queues separately, the incoming queue
is empty or perhaps briefly has one or two messages, while the
active queue holds more messages and for a somewhat longer time:
</p>

<blockquote>
<pre>
$ qshape incoming

                 T  5 10 20 40 80 160 320 640 1280 1280+
         TOTAL   0  0  0  0  0  0   0   0   0    0     0

$ qshape active

                 T  5 10 20 40 80 160 320 640 1280 1280+
         TOTAL   5  0  0  0  1  0   0   0   1    1     2
 meri.uwasa.fi   5  0  0  0  1  0   0   0   1    1     2
</pre>
</blockquote>

<h2><a name="dictionary_bounce">Example 2: Deferred queue full of
dictionary attack bounces</a></h2>

<p> This is from a server where recipient validation is not yet
available for some of the hosted domains. Dictionary attacks on
the unvalidated domains result in bounce backscatter. The bounces
dominate the queue, but with proper tuning they do not saturate the
incoming or active queues. The high volume of deferred mail is not
a direct cause for alarm. </p>

<blockquote>
<pre>
$ qshape deferred | head

                         T  5 10 20 40 80 160 320 640 1280 1280+
                TOTAL 2234  4  2  5  9 31  57 108 201  464  1353
  heyhihellothere.com  207  0  0  1  1  6   6   8  25   68    92
  pleazerzoneprod.com  105  0  0  0  0  0   0   0   5   44    56
       groups.msn.com   63  2  1  2  4  4  14  14  14    8     0
    orion.toppoint.de   49  0  0  0  1  0   2   4   3   16    23
          kali.com.cn   46  0  0  0  0  1   0   2   6   12    25
        meri.uwasa.fi   44  0  0  0  0  1   0   2   8   11    22
    gjr.paknet.com.pk   43  1  0  0  1  1   3   3   6   12    16
 aristotle.algonet.se   41  0  0  0  0  0   1   2  11   12    15
</pre>
</blockquote>

<p> The domains shown are mostly bulk-mailers, and all the volume
is in the tail end of the time distribution, showing that short-term
arrival rates are moderate. Larger numbers and lower message ages
are more indicative of current trouble. Old mail still going nowhere
is largely harmless so long as the active and incoming queues are
short. We can also see that the groups.msn.com undeliverables are
a low-rate steady stream rather than a concentrated dictionary attack
that is now over. </p>

<blockquote>
<pre>
$ qshape -s deferred | head

                    T  5 10 20 40 80 160 320 640 1280 1280+
           TOTAL 2193  4  4  5  8 33  56 104 205  465  1309
   MAILER-DAEMON 1709  4  4  5  8 33  55 101 198  452   849
     example.com  263  0  0  0  0  0   0   0   0    2   261
     example.org  209  0  0  0  0  0   1   3   6   11   188
     example.net    6  0  0  0  0  0   0   0   0    0     6
     example.edu    3  0  0  0  0  0   0   0   0    0     3
     example.gov    2  0  0  0  0  0   0   0   1    0     1
     example.mil    1  0  0  0  0  0   0   0   0    0     1
</pre>
</blockquote>

<p> Looking at the sender distribution, we see that, as expected,
most of the messages are bounces. </p>
<h2><a name="active_congestion">Example 3: Congestion in the active
queue</a></h2>

<p> This example is taken from a Feb 2004 discussion on the Postfix
Users list. Congestion was reported with the active and incoming
queues large and not shrinking despite very large delivery agent
process limits. The thread is archived at:
http://groups.google.com/groups?th=636626c645f5bbde </p>

<p> Using an older version of qshape(1) it was quickly determined
that all the messages were for just a few destinations: </p>

<blockquote>
<pre>
$ qshape <i>(show incoming and active queue status)</i>

                            T     A   5  10  20  40  80 160 320 320+
                TOTAL   11775  9996   0   0   1   1  42  94 221 1420
 user.sourceforge.net    7678  7678   0   0   0   0   0   0   0    0
lists.sourceforge.net    2313  2313   0   0   0   0   0   0   0    0
       gzd.gotdns.com     102     0   0   0   0   0   0   0   2  100
</pre>
</blockquote>

<p> The "A" column showed the count of messages in the active queue,
and the numbered columns showed totals for the deferred queue. At
10000 messages (the Postfix 1.x active queue size limit) the active
queue is full. The incoming queue was growing rapidly. </p>

<p> With the trouble destinations clearly identified, the administrator
quickly found and fixed the problem. It is substantially harder to
glean the same information from the logs. While a careful reading
of mailq(1) output should yield similar results, it is much harder
to gauge the magnitude of the problem by looking at the queue
one message at a time. </p>

<h2><a name="backlog">Example 4: High volume destination backlog</a></h2>

<p> When a site you send a lot of email to is down or slow, mail
messages will rapidly build up in the deferred queue, or worse, in
the active queue. The qshape output will show large numbers for
the destination domain in all age buckets that overlap the starting
time of the problem: </p>

<blockquote>
<pre>
$ qshape deferred | head

                    T   5  10  20  40   80  160 320 640 1280 1280+
          TOTAL  5000 200 200 400 800 1600 1000 200 200  200   200
 highvolume.com  4000 160 160 320 640 1280 1440   0   0    0     0
 ...
</pre>
</blockquote>

<p> Here the "highvolume.com" destination is continuing to accumulate
deferred mail. The incoming and active queues are fine, but the
deferred queue started growing some time between 1 and 2 hours ago
and continues to grow. </p>

<p> If the high volume destination is not down, but is instead
slow, one might see similar congestion in the active queue. Active
queue congestion is a greater cause for alarm; one might need to
take measures to ensure that the mail is deferred instead, or even
add an access(5) rule asking the sender to try again later. </p>

<p> If a high volume destination exhibits frequent bursts of consecutive
connections refused by all MX hosts, or "421 Server busy" errors, it
is possible for the queue manager to mark the destination as "dead"
despite the transient nature of the errors. The destination will be
retried after the expiration of a $minimal_backoff_time timer.
If the error bursts are frequent enough, it may be that only a small
quantity of email is delivered before the destination is again marked
"dead". In some cases, enabling static (not on demand) connection
caching by listing the appropriate nexthop domain in a table included in
"smtp_connection_cache_destinations" may help to reduce the error rate,
because most messages will re-use existing connections. </p>
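<p> For instance, a minimal main.cf sketch of static connection
caching, with example.com standing in for the problem nexthop
(the parameter also accepts lookup tables): </p>

<blockquote>
<pre>
/etc/postfix/main.cf:
    smtp_connection_cache_destinations = example.com
</pre>
</blockquote>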
<p> The MTA that has been observed most frequently to exhibit such
bursts of errors is Microsoft Exchange, which refuses connections
under load. Some proxy virus scanners in front of the Exchange
server propagate the refused connection to the client as a "421"
error. </p>

<p> Note that it is now possible to configure Postfix to exhibit similarly
erratic behavior by misconfiguring the anvil(8) service. Do not use
anvil(8) for steady-state rate limiting; its purpose is prevention of
(unintentional) denial of service, and the rate limits set should be
very generous! </p>

<p> If one finds oneself needing to deliver a high volume of mail to a
destination that exhibits frequent brief bursts of errors, and connection
caching does not solve the problem, there is a subtle workaround. </p>

<ul>

<li> <p> Postfix version 2.5 and later: </p>

<ul>

<li> <p> In master.cf set up a dedicated clone of the "smtp" transport
for the destination in question. In the example below we will call
it "fragile". </p>

<li> <p> In master.cf configure a reasonable process limit for the
cloned smtp transport (a number in the 10-20 range is typical). </p>

<li> <p> IMPORTANT!!! In main.cf configure a large per-destination
pseudo-cohort failure limit for the cloned smtp transport. </p>

<pre>
/etc/postfix/main.cf:
    transport_maps = hash:/etc/postfix/transport
    fragile_destination_concurrency_failed_cohort_limit = 100
    fragile_destination_concurrency_limit = 20

/etc/postfix/transport:
    example.com fragile:

/etc/postfix/master.cf:
    # service type  private unpriv  chroot  wakeup  maxproc command
    fragile   unix  -       -       n       -       20      smtp
</pre>

<p> See also the documentation for
default_destination_concurrency_failed_cohort_limit and
default_destination_concurrency_limit. </p>

</ul>

<li> <p> Earlier Postfix versions: </p>

<ul>

<li> <p> In master.cf set up a dedicated clone of the "smtp"
transport for the destination in question. In the example below
we will call it "fragile". </p>

<li> <p> In master.cf configure a reasonable process limit for the
transport (a number in the 10-20 range is typical). </p>

<li> <p> IMPORTANT!!! In main.cf configure a very large initial
and destination concurrency limit for this transport (say 2000). </p>

<pre>
/etc/postfix/main.cf:
    transport_maps = hash:/etc/postfix/transport
    initial_destination_concurrency = 2000
    fragile_destination_concurrency_limit = 2000

/etc/postfix/transport:
    example.com fragile:

/etc/postfix/master.cf:
    # service type  private unpriv  chroot  wakeup  maxproc command
    fragile   unix  -       -       n       -       20      smtp
</pre>

<p> See also the documentation for default_destination_concurrency_limit.
</p>

</ul>

</ul>

<p> The effect of this configuration is that up to 2000
consecutive errors are tolerated without marking the destination
dead, while the total concurrency remains reasonable (10-20
processes). This trick is only for a very specialized situation:
high volume delivery into a channel with multi-error bursts
that is capable of high throughput, but is repeatedly throttled by
the bursts of errors. </p>
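<p> To confirm that mail for the destination will actually be routed
to the clone, one can query the transport table with postmap(1).
With the example table above, the result should look like this: </p>

<blockquote>
<pre>
$ postmap -q example.com hash:/etc/postfix/transport
fragile:
</pre>
</blockquote>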
<p> When a destination is unable to handle the load even after the
Postfix process limit is reduced to 1, a desperate measure is to
insert brief delays between delivery attempts. </p>

<ul>

<li> <p> Postfix version 2.5 and later: </p>

<ul>

<li> <p> In master.cf set up a dedicated clone of the "smtp" transport
for the problem destination. In the example below we call it "slow".
</p>

<li> <p> In main.cf configure a short delay between deliveries to
the same destination. </p>

<pre>
/etc/postfix/main.cf:
    transport_maps = hash:/etc/postfix/transport
    slow_destination_rate_delay = 1

/etc/postfix/transport:
    example.com slow:

/etc/postfix/master.cf:
    # service type  private unpriv  chroot  wakeup  maxproc command
    slow      unix  -       -       n       -       -       smtp
</pre>

</ul>

<p> See also the documentation for default_destination_rate_delay. </p>

<p> This solution forces the Postfix smtp(8) client to wait for
$slow_destination_rate_delay seconds between deliveries to the same
destination. </p>

<li> <p> Earlier Postfix versions: </p>

<ul>

<li> <p> In the transport map entry for the problem destination,
specify a dead host as the primary nexthop. </p>

<li> <p> In the master.cf entry for the transport, specify the
problem destination as the fallback_relay and specify a small
smtp_connect_timeout value. </p>

<pre>
/etc/postfix/main.cf:
    transport_maps = hash:/etc/postfix/transport

/etc/postfix/transport:
    example.com slow:[dead.host]

/etc/postfix/master.cf:
    # service type  private unpriv  chroot  wakeup  maxproc command
    slow      unix  -       -       n       -       1       smtp
        -o fallback_relay=problem.example.com
        -o smtp_connect_timeout=1
        -o smtp_connection_cache_on_demand=no
</pre>

</ul>

<p> This solution forces the Postfix smtp(8) client to wait for
$smtp_connect_timeout seconds between deliveries. The connection
caching feature is disabled to prevent the client from skipping
over the dead host. </p>

</ul>
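<p> Whichever variant is used, remember that the transport table must
be rebuilt and that master.cf changes take effect only after Postfix
re-reads its configuration. A typical sequence: </p>

<blockquote>
<pre>
$ postmap /etc/postfix/transport
$ postfix reload
</pre>
</blockquote>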
<h2><a name="queues">Postfix queue directories</a></h2>

<p> The following sections describe Postfix queues: their purpose,
what normal behavior looks like, and how to diagnose abnormal
behavior. </p>

<h3> <a name="maildrop_queue"> The "maildrop" queue </a> </h3>

<p> Messages that have been submitted via the Postfix sendmail(1)
command, but not yet brought into the main Postfix queue by the
pickup(8) service, await processing in the "maildrop" queue. Messages
can be added to the "maildrop" queue even when the Postfix system
is not running. They will begin to be processed once Postfix is
started. </p>

<p> The "maildrop" queue is drained by the single threaded pickup(8)
service scanning the queue directory periodically or when notified
of new message arrival by the postdrop(1) program. The postdrop(1)
program is a setgid helper that allows the unprivileged Postfix
sendmail(1) program to inject mail into the "maildrop" queue and
to notify the pickup(8) service of its arrival. </p>

<p> All mail that enters the main Postfix queue does so via the
cleanup(8) service. The cleanup service is responsible for envelope
and header rewriting, header and body regular expression checks,
automatic bcc recipient processing, milter content processing, and
reliable insertion of the message into the Postfix "incoming" queue. </p>

<p> In the absence of excessive CPU consumption in cleanup(8) header
or body regular expression checks, or of other software consuming all
available CPU resources, Postfix performance is disk I/O bound.
The rate at which the pickup(8) service can inject messages into
the queue is largely determined by disk access times, since the
cleanup(8) service must commit the message to stable storage before
returning success. The same is true of the postdrop(1) program
writing the message to the "maildrop" directory. </p>

<p> As the pickup service is single threaded, it can deliver only
one message at a time, at a rate that does not exceed the reciprocal
of the disk I/O latency (plus CPU time, if not negligible) of the
cleanup service. </p>

<p> Congestion in this queue is indicative of an excessive local message
submission rate, or perhaps excessive CPU consumption in the cleanup(8)
service due to excessive body_checks, or (Postfix &ge; 2.3) high latency
milters. </p>

<p> Note that once the active queue is full, the cleanup service
will attempt to slow down message injection by pausing $in_flow_delay
for each message. In this case "maildrop" queue congestion may be
a consequence of congestion downstream, rather than a problem in
its own right. </p>

<p> Note that you should not attempt to deliver large volumes of mail via
the pickup(8) service. High volume sites should avoid using "simple"
content filters that re-inject scanned mail via Postfix sendmail(1)
and postdrop(1). </p>

<p> A high arrival rate of locally submitted mail may be an indication
of an uncaught forwarding loop, or a run-away notification program.
Try to keep the volume of local mail injection at a moderate level.
</p>

<p> The "postsuper -r" command can place selected messages into
the "maildrop" queue for reprocessing. This is most useful for
resetting any stale content_filter settings. Requeuing a large number
of messages using "postsuper -r" can clearly cause a spike in the
size of the "maildrop" queue. </p>

<h3> <a name="hold_queue"> The "hold" queue </a> </h3>

<p> The administrator can define "smtpd" access(5) policies, or
cleanup(8) header/body checks, that cause messages to be automatically
diverted from normal processing and placed indefinitely in the
"hold" queue. Messages placed in the "hold" queue stay there until
the administrator intervenes. No periodic delivery attempts are
made for messages in the "hold" queue. The postsuper(1) command
can be used to manually release messages into the "deferred" queue.
</p>

<p> Messages can potentially stay in the "hold" queue longer than
$maximal_queue_lifetime. If such "old" messages need to be released from
the "hold" queue, they should typically be moved into the "maildrop"
queue using "postsuper -r", so that the message gets a new timestamp and
is given more than one opportunity to be delivered. Messages that are
"young" can be moved directly into the "deferred" queue using
"postsuper -H". </p>
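<p> As a sketch of both operations (the "ALL" argument affects every
held message, so use it with care; see postsuper(1) for details): </p>

<blockquote>
<pre>
$ postsuper -r ALL hold     <i>(requeue "old" held mail via "maildrop")</i>
$ postsuper -H ALL          <i>(release "young" held mail into "deferred")</i>
</pre>
</blockquote>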
<p> The "hold" queue plays little role in Postfix performance, and
monitoring of the "hold" queue is typically motivated more by
tracking spam and malware than by performance issues. </p>

<h3> <a name="incoming_queue"> The "incoming" queue </a> </h3>

<p> All new mail entering the Postfix queue is written by the
cleanup(8) service into the "incoming" queue. New queue files are
created owned by the "postfix" user, with an access bitmask (or
mode) of 0600. Once a queue file is ready for further processing
the cleanup(8) service changes the queue file mode to 0700 and
notifies the queue manager of new mail arrival. The queue manager
ignores incomplete queue files whose mode is 0600, as these are
still being written by cleanup. </p>

<p> The queue manager scans the incoming queue, bringing any new
mail into the "active" queue if the active queue resource limits
have not been exceeded. By default, the active queue accommodates
at most 20000 messages. Once the active queue message limit is
reached, the queue manager stops scanning the incoming (and deferred,
see below) queue. </p>

<p> Under normal conditions the incoming queue is nearly empty (it has
only mode 0600 files), with the queue manager able to import new
messages into the active queue as soon as they become available.
</p>

<p> The incoming queue grows when the message input rate spikes
above the rate at which the queue manager can import messages into
the active queue. The main factors slowing down the queue manager
are disk I/O and lookup queries to the trivial-rewrite service. If the queue
manager is routinely not keeping up, consider not using "slow"
lookup services (MySQL, LDAP, ...) for transport lookups, or speeding
up the hosts that provide the lookup service. If the problem is I/O
starvation, consider striping the queue over more disks, faster controllers
with a battery-backed write cache, or other hardware improvements. At the very
least, make sure that the queue directory is mounted with the "noatime"
option if applicable to the underlying filesystem. </p>

<p> The in_flow_delay parameter is used to clamp the input rate
when the queue manager starts to fall behind. The cleanup(8) service
will pause for $in_flow_delay seconds before creating a new queue
file if it cannot obtain a "token" from the queue manager. </p>

<p> Since the number of cleanup(8) processes is limited in most
cases by the SMTP server concurrency, the input rate can exceed
the output rate by at most "SMTP connection count" / $in_flow_delay
messages per second. </p>

<p> With a default process limit of 100, and an in_flow_delay of
1s, the coupling is strong enough to limit a single run-away injector
to 1 message per second, but is not strong enough to deflect an
excessive input rate from many sources at the same time. </p>

<p> If a server is being hammered from multiple directions, consider
raising the in_flow_delay to 10 seconds, but only if the incoming
queue is growing even while the active queue is not full and the
trivial-rewrite service is using a fast transport lookup mechanism.
</p>
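<p> The corresponding main.cf change is a one-liner; 10s is merely
the figure suggested above, not a universal recommendation: </p>

<blockquote>
<pre>
/etc/postfix/main.cf:
    in_flow_delay = 10s
</pre>
</blockquote>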
<h3> <a name="active_queue"> The "active" queue </a> </h3>

<p> The queue manager is a delivery agent scheduler; it works to
ensure fast and fair delivery of mail to all destinations within
designated resource limits. </p>

<p> The active queue is somewhat analogous to an operating system's
process run queue. Messages in the active queue are ready to be
sent (runnable), but are not necessarily in the process of being
sent (running). </p>

<p> While most Postfix administrators think of the "active" queue
as a directory on disk, the real "active" queue is a set of data
structures in the memory of the queue manager process. </p>

<p> Messages in the "maildrop", "hold", "incoming" and "deferred"
queues (see below) do not occupy memory; they are safely stored on
disk waiting for their turn to be processed. The envelope information
for messages in the "active" queue is managed in memory, allowing
the queue manager to do global scheduling, allocating available
delivery agent processes to an appropriate message in the active
queue. </p>

<p> Within the active queue, (multi-recipient) messages are broken
up into groups of recipients that share the same transport/nexthop
combination; the group size is capped by the transport's recipient
concurrency limit. </p>

<p> Multiple recipient groups (from one or more messages) are queued
for delivery, grouped by transport/nexthop combination. The
<b>destination</b> concurrency limit for the transports caps the number
of simultaneous delivery attempts for each nexthop. Transports with
a <b>recipient</b> concurrency limit of 1 are special: these are grouped
by the actual recipient address rather than the nexthop, yielding
per-recipient concurrency limits rather than per-domain
concurrency limits. Per-recipient limits are appropriate when
performing final delivery to mailboxes rather than when relaying
to a remote server. </p>

<p> Congestion occurs in the active queue when one or more destinations
drain slower than the corresponding message input rate. </p>

<p> Input into the active queue comes both from new mail in the "incoming"
queue, and from retries of mail in the "deferred" queue. Should the "deferred"
queue get really large, retries of old mail can dominate the arrival
rate of new mail. Systems with more CPU, faster disks and more network
bandwidth can deal with larger deferred queues, but as a rule of thumb
the deferred queue scales to somewhere between 100,000 and 1,000,000
messages, with good performance unlikely above that "limit". Systems with
queues this large should typically stop accepting new mail, or put the
backlog "on hold" until the underlying issue is fixed (provided that
there is enough capacity to handle just the new mail). </p>
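<p> Parking the backlog and releasing it later can be done with
postsuper(1); a sketch ("ALL" affects every message in the named
queue, so be sure this is really what you want): </p>

<blockquote>
<pre>
$ postsuper -h ALL deferred <i>(put the current backlog "on hold")</i>
  <i>(... fix the underlying problem ...)</i>
$ postsuper -H ALL          <i>(release the backlog for delivery)</i>
</pre>
</blockquote>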
<p> When a destination is down for some time, the queue manager will
mark it dead, and immediately defer all mail for the destination without
trying to assign it to a delivery agent. In this case the messages
will quickly leave the active queue and end up in the deferred queue
(with Postfix &lt; 2.4, this is done directly by the queue manager,
with Postfix &ge; 2.4 this is done via the "retry" delivery agent). </p>

<p> When the destination is instead simply slow, or there is a problem
causing an excessive arrival rate, the active queue will grow and will
become dominated by mail to the congested destination. </p>

<p> The only way to reduce congestion is to either reduce the input
rate or increase the throughput. Increasing the throughput requires
either increasing the concurrency or reducing the latency of
deliveries. </p>

<p> For high volume sites a key tuning parameter is the number of
"smtp" delivery agents allocated to the "smtp" and "relay" transports.
High volume sites tend to send to many different destinations, many
of which may be down or slow, so a good fraction of the available
delivery agents will be blocked waiting for slow sites. Also, mail
destined across the globe will incur large SMTP command-response
latencies, so high message throughput can only be achieved with
more concurrent delivery agents. </p>

<p> The default "smtp" process limit of 100 is good enough for most
sites, and may even need to be lowered for sites with low bandwidth
connections (there is no use increasing concurrency once the network
pipe is full). When one finds that the queue is growing on an "idle"
system (CPU, disk I/O and network not exhausted), the remaining
explanation for the congestion is insufficient concurrency in the face
of a high average latency. If the number of outbound SMTP connections
(either ESTABLISHED or SYN_SENT) reaches the process limit while mail
is draining slowly and the system and network are not loaded, raise
the "smtp" and/or "relay" process limits! </p>

<p> When a high volume destination is served by multiple MX hosts with
typically low delivery latency, performance can suffer dramatically when
one of the MX hosts is unresponsive and SMTP connections to that host
time out. For example, if there are 2 equal weight MX hosts, the SMTP
connection timeout is 30 seconds, and one of the MX hosts is down, the
average SMTP connection will take approximately 15 seconds to complete.
With a default per-destination concurrency limit of 20 connections,
throughput falls to just over 1 message per second. </p>

<p> The best way to avoid bottlenecks when one or more MX hosts is
non-responsive is to use connection caching. Connection caching was
introduced with Postfix 2.2, and is by default enabled on demand for
destinations with a backlog of mail in the active queue. When connection
caching is in effect for a particular destination, established connections
are re-used to send additional messages; this reduces the number of
connections made per message delivery and maintains good throughput even
in the face of partial unavailability of the destination's MX hosts. </p>

<p> If connection caching is not available (Postfix &lt; 2.2) or does
not provide a sufficient latency reduction, especially for the "relay"
transport used to forward mail to "your own" domains, consider setting
lower than default SMTP connection timeouts (1-5 seconds) and higher
than default destination concurrency limits. This will further reduce
latency and provide more concurrency to maintain throughput should
latency rise. </p>

<p> Setting high concurrency limits for domains that are not your own may
be viewed as hostile by the receiving system, and steps may be taken
to prevent you from monopolizing the destination system's resources.
The defensive measures may substantially reduce your throughput or block
access entirely. Do not set aggressive concurrency limits for remote
domains without coordinating with the administrators of the target
domain. </p>

<p> If necessary, dedicate and tune custom transports for selected high
volume destinations. The "relay" transport is provided for forwarding mail
to domains for which your server is a primary or backup MX host. These can
make up a substantial fraction of your email traffic. Use the "relay" and
not the "smtp" transport to send email to these domains. Using the "relay"
transport allocates a separate delivery agent pool to these destinations
and allows separate tuning of timeouts and concurrency limits. </p>
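<p> A hypothetical tuning fragment along those lines is sketched below.
The values are purely illustrative; see the caveats above before raising
concurrency for destinations you do not control: </p>

<blockquote>
<pre>
/etc/postfix/main.cf:
    relay_destination_concurrency_limit = 50

/etc/postfix/master.cf:
    # service type  private unpriv  chroot  wakeup  maxproc command
    relay     unix  -       -       n       -       -       smtp
        -o smtp_connect_timeout=5s
</pre>
</blockquote>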
<p> Another common cause of congestion is unwarranted flushing of the
entire deferred queue. The deferred queue holds messages that are likely
to fail to be delivered, and that are also likely to be slow to fail
delivery (time out). As a result the most common reaction to a large
deferred queue (flush it!) is more than likely counter-productive, and
typically makes the congestion worse. Do not flush the deferred queue
unless you expect that most of its content has recently become
deliverable (e.g. relayhost back up after an outage)! </p>

<p> Note that whenever the queue manager is restarted, there may
already be messages in the active queue directory, but the "real"
active queue in memory is empty. In order to recover the in-memory
state, the queue manager moves all the active queue messages
back into the incoming queue, and then uses its normal incoming
queue scan to refill the active queue. The process of moving all
the messages back and forth, redoing transport table (trivial-rewrite(8)
resolve service) lookups, and re-importing the messages back into
memory is expensive. At all costs, avoid frequent restarts of the
queue manager (e.g. via frequent execution of "postfix reload"). </p>

<h3> <a name="deferred_queue"> The "deferred" queue </a> </h3>

<p> When all the deliverable recipients for a message are delivered,
and for some recipients delivery failed for a transient reason (it
might succeed later), the message is placed in the deferred queue.
</p>

<p> The queue manager scans the deferred queue periodically. The scan
interval is controlled by the queue_run_delay parameter. While a deferred
queue scan is in progress, if an incoming queue scan is also in progress
(ideally these are brief, since the incoming queue should be short), the
queue manager alternates between looking for messages in the "incoming"
queue and in the "deferred" queue. This "round-robin" strategy prevents
starvation of either the incoming or the deferred queue. </p>

<p> Each deferred queue scan only brings a fraction of the deferred
queue back into the active queue for a retry. This is because each
message in the deferred queue is assigned a "cool-off" time when
it is deferred. This is done by time-warping the modification
time of the queue file into the future. The queue file is not
eligible for a retry until its modification time is reached.
</p>

<p> The "cool-off" time is at least $minimal_backoff_time and at
most $maximal_backoff_time. The next retry time is set by doubling
the message's age in the queue, and adjusting up or down to lie
within the limits. This means that young messages are initially
retried more often than old messages. </p>

<p> If a high volume site routinely has large deferred queues, it
may be useful to adjust the queue_run_delay, minimal_backoff_time and
maximal_backoff_time to provide short enough delays on first failure
(Postfix &ge; 2.4 has a sensibly low minimal backoff time by default),
with perhaps longer delays after multiple failures, to reduce the
retransmission rate of old messages and thereby reduce the quantity
of previously deferred mail in the active queue. If you want a really
low minimal_backoff_time, you may also want to lower queue_run_delay,
but understand that more frequent scans will increase the demand for
disk I/O. </p>
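<p> A hypothetical main.cf sketch of that approach (the values are only
for illustration; the first two happen to match the Postfix &ge; 2.4
defaults, while the longer maximal_backoff_time slows down retries of
old mail): </p>

<blockquote>
<pre>
/etc/postfix/main.cf:
    queue_run_delay = 300s
    minimal_backoff_time = 300s
    maximal_backoff_time = 14400s
</pre>
</blockquote>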
<p> One common cause of large deferred queues is failure to validate
recipients at the SMTP input stage. Since spammers routinely launch
dictionary attacks from unrepliable sender addresses, the bounces
for invalid recipient addresses clog the deferred queue (and at high
volumes proportionally clog the active queue). Recipient validation
is strongly recommended through use of the local_recipient_maps and
relay_recipient_maps parameters. Even when bounces drain quickly they
inundate innocent victims of forgery with unwanted email. To avoid
this, do not accept mail for invalid recipients. </p>

<p> When a host with lots of deferred mail is down for some time,
it is possible for the entire deferred queue to reach its retry
time simultaneously. This can lead to a very full active queue once
the host comes back up. The phenomenon can repeat approximately
every maximal_backoff_time seconds if the messages are again deferred
after a brief burst of congestion. Perhaps a future Postfix release
will add a random offset to the retry time (or use a combination
of strategies) to reduce the odds of repeated complete deferred
queue flushes. </p>

<h2><a name="credits">Credits</a></h2>

<p> The qshape(1) program was developed by Victor Duchovni of Morgan
Stanley, who also wrote the initial version of this document. </p>

</body>

</html>