; $NetBSD: milli.S,v 1.3 2022/06/13 16:00:05 skrll Exp $
;
; $OpenBSD: milli.S,v 1.5 2001/03/29 04:08:20 mickey Exp $
;
; (c) Copyright 1986 HEWLETT-PACKARD COMPANY
;
; To anyone who acknowledges that this file is provided "AS IS"
; without any express or implied warranty:
; permission to use, copy, modify, and distribute this file
; for any purpose is hereby granted without fee, provided that
; the above copyright notice and this notice appears in all
; copies, and that the name of Hewlett-Packard Company not be
; used in advertising or publicity pertaining to distribution
; of the software without specific, written prior permission.
; Hewlett-Packard Company makes no representations about the
; suitability of this software for any purpose.
;

; Standard Hardware Register Definitions for Use with Assembler
; version A.08.06
;	- fr16-31 added at Utah
;~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
; Hardware General Registers
r0:	.equ	0
r1:	.equ	1
r2:	.equ	2
r3:	.equ	3
r4:	.equ	4
r5:	.equ	5
r6:	.equ	6
r7:	.equ	7
r8:	.equ	8
r9:	.equ	9
r10:	.equ	10
r11:	.equ	11
r12:	.equ	12
r13:	.equ	13
r14:	.equ	14
r15:	.equ	15
r16:	.equ	16
r17:	.equ	17
r18:	.equ	18
r19:	.equ	19
r20:	.equ	20
r21:	.equ	21
r22:	.equ	22
r23:	.equ	23
r24:	.equ	24
r25:	.equ	25
r26:	.equ	26
r27:	.equ	27
r28:	.equ	28
r29:	.equ	29
r30:	.equ	30
r31:	.equ	31

; Hardware Space Registers
sr0:	.equ	0
sr1:	.equ	1
sr2:	.equ	2
sr3:	.equ	3
sr4:	.equ	4
sr5:	.equ	5
sr6:	.equ	6
sr7:	.equ	7

; Hardware Floating Point Registers
fr0:	.equ	0
fr1:	.equ	1
fr2:	.equ	2
fr3:	.equ	3
fr4:	.equ	4
fr5:	.equ	5
fr6:	.equ	6
fr7:	.equ	7
fr8:	.equ	8
fr9:	.equ	9
fr10:	.equ	10
fr11:	.equ	11
fr12:	.equ	12
fr13:	.equ	13
fr14:	.equ	14
fr15:	.equ	15
fr16:	.equ	16
fr17:	.equ	17
fr18:	.equ	18
fr19:	.equ	19
fr20:	.equ	20
fr21:	.equ	21
fr22:	.equ	22
fr23:	.equ	23
fr24:	.equ	24
fr25:	.equ	25
fr26:	.equ	26
fr27:	.equ	27
fr28:	.equ	28
fr29:	.equ	29
fr30:	.equ	30
fr31:	.equ	31

; Hardware Control Registers
cr0:	.equ	0
rctr:	.equ	0	; Recovery Counter Register

cr8:	.equ	8	; Protection ID 1
pidr1:	.equ	8

cr9:	.equ	9	; Protection ID 2
pidr2:	.equ	9

cr10:	.equ	10
ccr:	.equ	10	; Coprocessor Configuration Register

cr11:	.equ	11
sar:	.equ	11	; Shift Amount Register

cr12:	.equ	12
pidr3:	.equ	12	; Protection ID 3

cr13:	.equ	13
pidr4:	.equ	13	; Protection ID 4

cr14:	.equ	14
iva:	.equ	14	; Interrupt Vector Address

cr15:	.equ	15
eiem:	.equ	15	; External Interrupt Enable Mask

cr16:	.equ	16
itmr:	.equ	16	; Interval Timer

cr17:	.equ	17
pcsq:	.equ	17	; Program Counter Space queue

cr18:	.equ	18
pcoq:	.equ	18	; Program Counter Offset queue

cr19:	.equ	19
iir:	.equ	19	; Interruption Instruction Register

cr20:	.equ	20
isr:	.equ	20	; Interruption Space Register

cr21:	.equ	21
ior:	.equ	21	; Interruption Offset Register

cr22:	.equ	22
ipsw:	.equ	22	; Interruption Processor Status Word
cr23:	.equ	23
eirr:	.equ	23	; External Interrupt Request

cr24:	.equ	24
ppda:	.equ	24	; Physical Page Directory Address
tr0:	.equ	24	; Temporary register 0

cr25:	.equ	25
hta:	.equ	25	; Hash Table Address
tr1:	.equ	25	; Temporary register 1

cr26:	.equ	26
tr2:	.equ	26	; Temporary register 2

cr27:	.equ	27
tr3:	.equ	27	; Temporary register 3

cr28:	.equ	28
tr4:	.equ	28	; Temporary register 4

cr29:	.equ	29
tr5:	.equ	29	; Temporary register 5

cr30:	.equ	30
tr6:	.equ	30	; Temporary register 6

cr31:	.equ	31
tr7:	.equ	31	; Temporary register 7

;~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
; Procedure Call Convention                                              ~
; Register Definitions for Use with Assembler                            ~
; version A.08.06                                                        ~
;~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
; Software Architecture General Registers
rp:	.equ	r2	; return pointer
mrp:	.equ	r31	; millicode return pointer
ret0:	.equ	r28	; return value
ret1:	.equ	r29	; return value (high part of double)
sl:	.equ	r29	; static link
sp:	.equ	r30	; stack pointer
dp:	.equ	r27	; data pointer
arg0:	.equ	r26	; argument
arg1:	.equ	r25	; argument or high part of double argument
arg2:	.equ	r24	; argument
arg3:	.equ	r23	; argument or high part of double argument
;_____________________________________________________________________________
; Software Architecture Space Registers
;	sr0		; return link from BLE
sret:	.equ	sr1	; return value
sarg:	.equ	sr1	; argument
;	sr4		; PC SPACE tracker
;	sr5		; process private data
;_____________________________________________________________________________
; Software Architecture Pseudo Registers
previous_sp:	.equ	64	; old stack pointer (locates previous frame)

;~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
; Standard space and subspace definitions.  version A.08.06
; These are generally suitable for programs on HP_UX and HPE.
; Statements commented out are used when building such things as operating
; system kernels.
;;;;;;;;;;;;;;;;
; Additional code subspaces should have ALIGN=8 for an interspace BV
; and should have SORT=24.
;
; For an incomplete executable (program bound to shared libraries),
; sort keys $GLOBAL$ -1 and $GLOBAL$ -2 are reserved for the $DLT$
; and $PLT$ subspaces respectively.
;;;;;;;;;;;;;;;
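
; The routines in this file follow the PA-RISC millicode convention
; rather than the full procedure call convention: arguments arrive in
; arg0 (r26) and arg1 (r25), the result comes back in ret1 (r29), and
; the return address is in mrp (r31) rather than rp (r2).  A minimal
; caller sketch, assuming a callee within reach of a direct branch
; (illustrative only; %r5 and %r6 stand for arbitrary caller registers):
;
;	copy	%r5,%r26	; arg0 = dividend
;	bl	$$divU,%r31	; millicode call, return link in mrp
;	copy	%r6,%r25	; (delay slot) arg1 = divisor
;	copy	%r29,%r5	; quotient arrives in ret1 (%r29)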

	.text
	.EXPORT	$$remI,millicode
;	.IMPORT	cerror
; $$remI: 32-bit signed remainder of arg0 (r26) by arg1 (r25),
; returned in ret1 (r29).
$$remI:
	.PROC
	.CALLINFO NO_CALLS
	.ENTRY
	addit,=	0,arg1,r0	; trap if divisor == 0
	add,>=	r0,arg0,ret1	; move dividend; if negative,
	sub	r0,ret1,ret1	; make it positive
	sub	r0,arg1,r1	; clear carry, negate the divisor
	ds	r0,r1,r0	; set V-bit to 1
	or	r0,r0,r1	; clear r1 for the divide steps
	add	ret1,ret1,ret1	; shift msb bit into carry
	ds	r1,arg1,r1	; 32 divide steps follow
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	ds	r1,arg1,r1
	addc	ret1,ret1,ret1
	movb,>=,n r1,ret1,remI300 ; move remainder to ret1; done if >= 0
	add,<	arg1,r0,r0	; nullify next if divisor < 0
	add,tr	r1,arg1,ret1	; correction when divisor >= 0
	sub	r1,arg1,ret1	; correction when divisor < 0
remI300: add,>=	arg0,r0,r0	; nullify next if dividend >= 0

	sub	r0,ret1,ret1	; remainder takes the dividend's sign
	bv	r0(r31)
	nop
	.EXIT
	.PROCEND

bit1:	.equ	1

bit30:	.equ	30
bit31:	.equ	31

len2:	.equ	2

len4:	.equ	4

#if 0
$$dyncall:
	.proc
	.callinfo NO_CALLS
	.export	$$dyncall,MILLICODE

	bb,>=,n	22,bit30,noshlibs

	depi	0,bit31,len2,22
	ldw	4(22),19
	ldw	0(22),22
noshlibs:
	ldsid	(22),r1
	mtsp	r1,sr0
	be	0(sr0,r22)
	stw	rp,-24(sp)
	.procend

$$sh_func_adrs:
	.proc
	.callinfo NO_CALLS
	.export	$$sh_func_adrs, millicode
	ldo	0(r26),ret1
	dep	r0,30,1,r26
	probew	(r26),r31,r22
	extru,=	r22,31,1,r22
	bv	r0(r31)
	ldws	0(r26),ret1
	.procend
#endif

temp:	.EQU	r1

retreg:	.EQU	ret1	; r29

	.export	$$divU,millicode
	.import	$$divU_3,millicode
	.import	$$divU_5,millicode
	.import	$$divU_6,millicode
	.import	$$divU_7,millicode
	.import	$$divU_9,millicode
	.import	$$divU_10,millicode
	.import	$$divU_12,millicode
	.import	$$divU_14,millicode
	.import	$$divU_15,millicode
$$divU:
	.proc
	.callinfo NO_CALLS
; The subtract is not nullified since it does no harm and can be used
; by the two cases that branch back to "normal".
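; What follows is a straight-line 32-step unsigned divide of arg0 by
; arg1, the quotient accumulating in retreg (ret1/r29).  Divisors 1..15
; and divisors with the high bit set are peeled off by the
; special_divisor path below; everything else runs all 32 steps, e.g.
; 100/7 falls through here and returns 14 in r29.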
	comib,>=	15,arg1,special_divisor
	sub	r0,arg1,temp		; clear carry, negate the divisor
	ds	r0,temp,r0		; set V-bit to 1
normal:
	add	arg0,arg0,retreg	; shift msb bit into carry
	ds	r0,arg1,temp		; 1st divide step, if no carry
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 2nd divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 3rd divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 4th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 5th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 6th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 7th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 8th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 9th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 10th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 11th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 12th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 13th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 14th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 15th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 16th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 17th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 18th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 19th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 20th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 21st divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 22nd divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 23rd divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 24th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 25th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 26th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 27th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 28th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 29th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 30th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 31st divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 32nd divide step,
	bv	0(r31)
	addc	retreg,retreg,retreg	; shift last retreg bit into retreg
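; Each ds/addc pair above develops one quotient bit: ds is the PA-RISC
; divide-step primitive, conditionally adding or subtracting the
; divisor from the running remainder (a non-restoring divide steered by
; the carry and V bits), and addc shifts the resulting carry -- the new
; quotient bit -- into the low end of retreg.  Thirty-two pairs yield
; the full 32-bit quotient.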
;_____________________________________________________________________________
; handle the cases where divisor is a small constant or has high bit on
special_divisor:
	blr	arg1,r0
	comib,>,n 0,arg1,big_divisor	; nullify previous instruction
zero_divisor:	; this label is here to provide external visibility

	addit,=	0,arg1,0		; trap for zero dvr
	nop
	bv	0(r31)			; divisor == 1
	copy	arg0,retreg
	bv	0(r31)			; divisor == 2
	extru	arg0,30,31,retreg
	b,n	$$divU_3		; divisor == 3
	nop
	bv	0(r31)			; divisor == 4
	extru	arg0,29,30,retreg
	b,n	$$divU_5		; divisor == 5
	nop
	b,n	$$divU_6		; divisor == 6
	nop
	b,n	$$divU_7		; divisor == 7
	nop
	bv	0(r31)			; divisor == 8
	extru	arg0,28,29,retreg
	b,n	$$divU_9		; divisor == 9
	nop
	b,n	$$divU_10		; divisor == 10
	nop
	b	normal			; divisor == 11
	ds	r0,temp,r0		; set V-bit to 1
	b,n	$$divU_12		; divisor == 12
	nop
	b	normal			; divisor == 13
	ds	r0,temp,r0		; set V-bit to 1
	b,n	$$divU_14		; divisor == 14
	nop
	b,n	$$divU_15		; divisor == 15
	nop
;_____________________________________________________________________________
; Handle the case where the high bit is on in the divisor.
; Compute:	if( dividend>=divisor) quotient=1; else quotient=0;
; Note:		dividend>=divisor iff dividend-divisor does not borrow
;		and not borrow iff carry
big_divisor:
	sub	arg0,arg1,r0
	bv	0(r31)
	addc	r0,r0,retreg
	.procend
	.end

t2:	.EQU	r1

; x2	.EQU	arg0	; r26
t1:	.EQU	arg1	; r25

; x1	.EQU	ret1	; r29
;_____________________________________________________________________________

$$divide_by_constant:
	.PROC
	.CALLINFO NO_CALLS
	.export	$$divide_by_constant,millicode
; Provides a "nice" label for the code covered by the unwind descriptor
; for things like gprof.
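; The power-of-two entry points below give round-toward-zero signed
; division: for a negative dividend the bias 2**k - 1 is added before
; the arithmetic right shift (COMCLR,>= nullifies the ADDI when
; arg0 >= 0).  Worked example (illustrative): -7/2 -- the ADDI makes it
; -6, EXTRS by one bit gives -3; the unbiased shift alone would give -4.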
$$divI_2:
	.EXPORT		$$divI_2,MILLICODE
	COMCLR,>=	arg0,0,0
	ADDI		1,arg0,arg0
	bv		0(r31)
	EXTRS		arg0,30,31,ret1

$$divI_4:
	.EXPORT		$$divI_4,MILLICODE
	COMCLR,>=	arg0,0,0
	ADDI		3,arg0,arg0
	bv		0(r31)
	EXTRS		arg0,29,30,ret1

$$divI_8:
	.EXPORT		$$divI_8,MILLICODE
	COMCLR,>=	arg0,0,0
	ADDI		7,arg0,arg0
	bv		0(r31)
	EXTRS		arg0,28,29,ret1

$$divI_16:
	.EXPORT		$$divI_16,MILLICODE
	COMCLR,>=	arg0,0,0
	ADDI		15,arg0,arg0
	bv		0(r31)
	EXTRS		arg0,27,28,ret1

$$divI_3:
	.EXPORT		$$divI_3,MILLICODE
	COMB,<,N	arg0,0,$neg3

	ADDI		1,arg0,arg0
	EXTRU		arg0,1,2,ret1
	SH2ADD		arg0,arg0,arg0
	B		$pos
	ADDC		ret1,0,ret1

$neg3:
	SUBI		1,arg0,arg0
	EXTRU		arg0,1,2,ret1
	SH2ADD		arg0,arg0,arg0
	B		$neg
	ADDC		ret1,0,ret1

$$divU_3:
	.EXPORT		$$divU_3,MILLICODE
	ADDI		1,arg0,arg0
	ADDC		0,0,ret1
	SHD		ret1,arg0,30,t1
	SH2ADD		arg0,arg0,arg0
	B		$pos
	ADDC		ret1,t1,ret1

$$divI_5:
	.EXPORT		$$divI_5,MILLICODE
	COMB,<,N	arg0,0,$neg5
	ADDI		3,arg0,t1
	SH1ADD		arg0,t1,arg0
	B		$pos
	ADDC		0,0,ret1

$neg5:
	SUB		0,arg0,arg0
	ADDI		1,arg0,arg0
	SHD		0,arg0,31,ret1
	SH1ADD		arg0,arg0,arg0
	B		$neg
	ADDC		ret1,0,ret1

$$divU_5:
	.EXPORT		$$divU_5,MILLICODE
	ADDI		1,arg0,arg0
	ADDC		0,0,ret1
	SHD		ret1,arg0,31,t1
	SH1ADD		arg0,arg0,arg0
	B		$pos
	ADDC		t1,ret1,ret1

$$divI_6:
	.EXPORT		$$divI_6,MILLICODE
	COMB,<,N	arg0,0,$neg6
	EXTRU		arg0,30,31,arg0
	ADDI		5,arg0,t1
	SH2ADD		arg0,t1,arg0
	B		$pos
	ADDC		0,0,ret1

$neg6:
	SUBI		2,arg0,arg0
	EXTRU		arg0,30,31,arg0
	SHD		0,arg0,30,ret1
	SH2ADD		arg0,arg0,arg0
	B		$neg
	ADDC		ret1,0,ret1

$$divU_6:
	.EXPORT		$$divU_6,MILLICODE
	EXTRU		arg0,30,31,arg0
	ADDI		1,arg0,arg0
	SHD		0,arg0,30,ret1
	SH2ADD		arg0,arg0,arg0
	B		$pos
	ADDC		ret1,0,ret1

$$divU_10:
	.EXPORT		$$divU_10,MILLICODE
	EXTRU		arg0,30,31,arg0
	ADDI		3,arg0,t1
	SH1ADD		arg0,t1,arg0
	ADDC		0,0,ret1
$pos:
	SHD		ret1,arg0,28,t1
	SHD		arg0,0,28,t2
	ADD		arg0,t2,arg0
	ADDC		ret1,t1,ret1
$pos_for_17:
	SHD		ret1,arg0,24,t1
	SHD		arg0,0,24,t2
	ADD		arg0,t2,arg0
	ADDC		ret1,t1,ret1

	SHD		ret1,arg0,16,t1
	SHD		arg0,0,16,t2
	ADD		arg0,t2,arg0
	bv		0(r31)
	ADDC		ret1,t1,ret1

$$divI_10:
	.EXPORT		$$divI_10,MILLICODE
	COMB,<		arg0,0,$neg10
	COPY		0,ret1
	EXTRU		arg0,30,31,arg0
	ADDIB,TR	1,arg0,$pos
	SH1ADD		arg0,arg0,arg0

$neg10:
	SUBI		2,arg0,arg0
	EXTRU		arg0,30,31,arg0
	SH1ADD		arg0,arg0,arg0
$neg:
	SHD		ret1,arg0,28,t1
	SHD		arg0,0,28,t2
	ADD		arg0,t2,arg0
	ADDC		ret1,t1,ret1
$neg_for_17:
	SHD		ret1,arg0,24,t1
	SHD		arg0,0,24,t2
	ADD		arg0,t2,arg0
	ADDC		ret1,t1,ret1

	SHD		ret1,arg0,16,t1
	SHD		arg0,0,16,t2
	ADD		arg0,t2,arg0
	ADDC		ret1,t1,ret1
	bv		0(r31)
	SUB		0,ret1,ret1

$$divI_12:
	.EXPORT		$$divI_12,MILLICODE
	COMB,<		arg0,0,$neg12
	COPY		0,ret1
	EXTRU		arg0,29,30,arg0
	ADDIB,TR	1,arg0,$pos
	SH2ADD		arg0,arg0,arg0

$neg12:
	SUBI		4,arg0,arg0
	EXTRU		arg0,29,30,arg0
	B		$neg
	SH2ADD		arg0,arg0,arg0

$$divU_12:
	.EXPORT		$$divU_12,MILLICODE
	EXTRU		arg0,29,30,arg0
	ADDI		5,arg0,t1
	SH2ADD		arg0,t1,arg0
	B		$pos
	ADDC		0,0,ret1

$$divI_15:
	.EXPORT		$$divI_15,MILLICODE
	COMB,<		arg0,0,$neg15
	COPY		0,ret1
	ADDIB,TR	1,arg0,$pos+4
	SHD		ret1,arg0,28,t1

$neg15:
	B		$neg
	SUBI		1,arg0,arg0

$$divU_15:
	.EXPORT		$$divU_15,MILLICODE
	ADDI		1,arg0,arg0
	B		$pos
	ADDC		0,0,ret1

$$divI_17:
	.EXPORT		$$divI_17,MILLICODE
	COMB,<,N	arg0,0,$neg17
	ADDI		1,arg0,arg0
	SHD		0,arg0,28,t1
	SHD		arg0,0,28,t2
	SUB		t2,arg0,arg0
	B		$pos_for_17
	SUBB		t1,0,ret1

$neg17:
	SUBI		1,arg0,arg0
	SHD		0,arg0,28,t1
	SHD		arg0,0,28,t2
	SUB		t2,arg0,arg0
	B		$neg_for_17
	SUBB		t1,0,ret1

$$divU_17:
	.EXPORT		$$divU_17,MILLICODE
	ADDI		1,arg0,arg0
	ADDC		0,0,ret1
	SHD		ret1,arg0,28,t1
$u17:
	SHD		arg0,0,28,t2
	SUB		t2,arg0,arg0
	B		$pos_for_17
	SUBB		t1,ret1,ret1

$$divI_7:
	.EXPORT		$$divI_7,MILLICODE
	COMB,<,N	arg0,0,$neg7
$7:
	ADDI		1,arg0,arg0
	SHD		0,arg0,29,ret1
	SH3ADD		arg0,arg0,arg0
	ADDC		ret1,0,ret1
$pos7:
	SHD		ret1,arg0,26,t1
	SHD		arg0,0,26,t2
	ADD		arg0,t2,arg0
	ADDC		ret1,t1,ret1

	SHD		ret1,arg0,20,t1
	SHD		arg0,0,20,t2
	ADD		arg0,t2,arg0
	ADDC		ret1,t1,t1

	COPY		0,ret1
	SHD,=		t1,arg0,24,t1
$1:
	ADDB,TR		t1,ret1,$2
	EXTRU		arg0,31,24,arg0

	bv,n		0(r31)

$2:
	ADDB,TR		t1,arg0,$1
	EXTRU,=		arg0,7,8,t1

$neg7:
	SUBI		1,arg0,arg0
$8:
	SHD		0,arg0,29,ret1
	SH3ADD		arg0,arg0,arg0
	ADDC		ret1,0,ret1

$neg7_shift:
	SHD		ret1,arg0,26,t1
	SHD		arg0,0,26,t2
	ADD		arg0,t2,arg0
	ADDC		ret1,t1,ret1

	SHD		ret1,arg0,20,t1
	SHD		arg0,0,20,t2
	ADD		arg0,t2,arg0
	ADDC		ret1,t1,t1

	COPY		0,ret1
	SHD,=		t1,arg0,24,t1
$3:
	ADDB,TR		t1,ret1,$4
	EXTRU		arg0,31,24,arg0

	bv		0(r31)
	SUB		0,ret1,ret1

$4:
	ADDB,TR		t1,arg0,$3
	EXTRU,=		arg0,7,8,t1

$$divU_7:
	.EXPORT		$$divU_7,MILLICODE
	ADDI		1,arg0,arg0
	ADDC		0,0,ret1
	SHD		ret1,arg0,29,t1
	SH3ADD		arg0,arg0,arg0
	B		$pos7
	ADDC		t1,ret1,ret1

$$divI_9:
	.EXPORT		$$divI_9,MILLICODE
	COMB,<,N	arg0,0,$neg9
	ADDI		1,arg0,arg0
	SHD		0,arg0,29,t1
	SHD		arg0,0,29,t2
	SUB		t2,arg0,arg0
	B		$pos7
	SUBB		t1,0,ret1

$neg9:
	SUBI		1,arg0,arg0
	SHD		0,arg0,29,t1
	SHD		arg0,0,29,t2
	SUB		t2,arg0,arg0
	B		$neg7_shift
	SUBB		t1,0,ret1

$$divU_9:
	.EXPORT		$$divU_9,MILLICODE
	ADDI		1,arg0,arg0
	ADDC		0,0,ret1
	SHD		ret1,arg0,29,t1
	SHD		arg0,0,29,t2
	SUB		t2,arg0,arg0
	B		$pos7
	SUBB		t1,ret1,ret1

$$divI_14:
	.EXPORT		$$divI_14,MILLICODE
	COMB,<,N	arg0,0,$neg14
$$divU_14:
	.EXPORT		$$divU_14,MILLICODE
	B		$7
	EXTRU		arg0,30,31,arg0

$neg14:
	SUBI		2,arg0,arg0
	B		$8
	EXTRU		arg0,30,31,arg0

	.PROCEND
	.END

rmndr:	.EQU	ret1	; r29

	.export	$$remU,millicode
$$remU:
	.proc
	.callinfo NO_CALLS
	.entry

	comib,>=,n 0,arg1,special_case
	sub	r0,arg1,rmndr		; clear carry, negate the divisor
	ds	r0,rmndr,r0		; set V-bit to 1
	add	arg0,arg0,temp		; shift msb bit into carry
	ds	r0,arg1,rmndr		; 1st divide step, if no carry
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 2nd divide step
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 3rd divide step
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 4th divide step
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 5th divide step
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 6th divide step
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 7th divide step
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 8th divide step
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 9th divide step
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 10th divide step
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 11th divide step
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 12th divide step
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 13th divide step
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 14th divide step
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 15th divide step
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 16th divide step
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 17th divide step
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 18th divide step
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 19th divide step
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 20th divide step
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 21st divide step
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 22nd divide step
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 23rd divide step
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 24th divide step
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 25th divide step
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 26th divide step
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 27th divide step
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 28th divide step
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 29th divide step
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 30th divide step
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 31st divide step
	addc	temp,temp,temp		; shift temp with/into carry
	ds	rmndr,arg1,rmndr	; 32nd divide step,
	comiclr,<= 0,rmndr,r0
	add	rmndr,arg1,rmndr	; correction
;	.exit
	bv,n	0(r31)
	nop
; Putting >= on the last DS and deleting COMICLR does not work!
;_____________________________________________________________________________
special_case:
	addit,=	0,arg1,r0		; trap on div by zero
	sub,>>=	arg0,arg1,rmndr
	copy	arg0,rmndr
	.exit
	bv,n	0(r31)
	nop
	.procend
	.end
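
; $$mulI below multiplies arg0 (r26) by arg1 (r25) with no hardware
; multiply, consuming the multiplier (the smaller operand, after the
; initial swap) eight bits at a time: "blr %r1,0" dispatches into the
; 256-entry table x0..x255 (four instructions per entry) on the low
; byte, each entry folding that byte's partial product into %r29 via
; shift-and-add chains; while multiplier bits remain, %r26 is shifted
; left eight and the dispatch repeats.  Worked example (illustrative):
; multiplier 0x10e dispatches to x14 (%r29 += 14*%r26), shifts %r26
; left 8, then dispatches to x1, giving 14*x + 256*x = 270*x.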
; Use bv 0(r31) and bv,n 0(r31) instead.
; #define	return		bv	0(%mrp)
; #define	return_n	bv,n	0(%mrp)

	.align 16
$$mulI:

	.proc
	.callinfo NO_CALLS
	.export	$$mulI, millicode
	combt,<<= %r25,%r26,l4	; swap args if unsigned %r25>%r26
	copy	0,%r29		; zero out the result
	xor	%r26,%r25,%r26	; swap %r26 & %r25 using the
	xor	%r26,%r25,%r25	;  old xor trick
	xor	%r26,%r25,%r26
l4:	combt,<=	0,%r26,l3	; if %r26>=0 then proceed like unsigned

	zdep	%r25,30,8,%r1	; %r1 = (%r25&0xff)<<1 *********
	sub,>	0,%r25,%r1	; otherwise negate both and
	combt,<=,n %r26,%r1,l2	;  swap back if |%r26|<|%r25|
	sub	0,%r26,%r25
	movb,tr,n %r1,%r26,l2	; 10th inst.

l0:	add	%r29,%r1,%r29	; add in this partial product

l1:	zdep	%r26,23,24,%r26	; %r26 <<= 8 ******************

l2:	zdep	%r25,30,8,%r1	; %r1 = (%r25&0xff)<<1 *********

l3:	blr	%r1,0		; case on these 8 bits ******

	extru	%r25,23,24,%r25	; %r25 >>= 8 ******************

;16 insts before this.
;			  %r26 <<= 8 **************************
x0:	comb,<>	%r25,0,l2	! zdep	%r26,23,24,%r26	! bv,n	0(r31)	! nop
x1:	comb,<>	%r25,0,l1	! add	%r29,%r26,%r29	! bv,n	0(r31)	! nop
x2:	comb,<>	%r25,0,l1	! sh1add %r26,%r29,%r29	! bv,n	0(r31)	! nop
x3:	comb,<>	%r25,0,l0	! sh1add %r26,%r26,%r1	! bv	0(r31)	! add	%r29,%r1,%r29
x4:	comb,<>	%r25,0,l1	! sh2add %r26,%r29,%r29	! bv,n	0(r31)	! nop
x5:	comb,<>	%r25,0,l0	! sh2add %r26,%r26,%r1	! bv	0(r31)	! add	%r29,%r1,%r29
x6:	sh1add	%r26,%r26,%r1	! comb,<> %r25,0,l1	! sh1add %r1,%r29,%r29	! bv,n	0(r31)
x7:	sh1add	%r26,%r26,%r1	! comb,<> %r25,0,l0	! sh2add %r26,%r29,%r29	! b,n	ret_t0
x8:	comb,<>	%r25,0,l1	! sh3add %r26,%r29,%r29	! bv,n	0(r31)	! nop
x9:	comb,<>	%r25,0,l0	! sh3add %r26,%r26,%r1	! bv	0(r31)	! add	%r29,%r1,%r29
x10:	sh2add	%r26,%r26,%r1	! comb,<> %r25,0,l1	! sh1add %r1,%r29,%r29	! bv,n	0(r31)
x11:	sh1add	%r26,%r26,%r1	! comb,<> %r25,0,l0	! sh3add %r26,%r29,%r29	! b,n	ret_t0
x12:	sh1add	%r26,%r26,%r1	! comb,<> %r25,0,l1	! sh2add %r1,%r29,%r29	! bv,n	0(r31)
x13:	sh2add	%r26,%r26,%r1	! comb,<> %r25,0,l0	! sh3add %r26,%r29,%r29	! b,n	ret_t0
x14:	sh1add	%r26,%r26,%r1	! sh1add %r1,%r26,%r1	! b	e_shift	! sh1add %r1,%r29,%r29
x15:	sh2add	%r26,%r26,%r1	! comb,<> %r25,0,l0	! sh1add %r1,%r1,%r1	! b,n	ret_t0
x16:	zdep	%r26,27,28,%r1	! comb,<> %r25,0,l1	! add	%r29,%r1,%r29	! bv,n	0(r31)
x17:	sh3add	%r26,%r26,%r1	! comb,<> %r25,0,l0	! sh3add %r26,%r1,%r1	! b,n	ret_t0
x18:	sh3add	%r26,%r26,%r1	! comb,<> %r25,0,l1	! sh1add %r1,%r29,%r29	! bv,n	0(r31)
x19:	sh3add	%r26,%r26,%r1	! comb,<> %r25,0,l0	! sh1add %r1,%r26,%r1	! b,n	ret_t0
x20:	sh2add	%r26,%r26,%r1	! comb,<> %r25,0,l1	! sh2add %r1,%r29,%r29	! bv,n	0(r31)
x21:	sh2add	%r26,%r26,%r1	! comb,<> %r25,0,l0	! sh2add %r1,%r26,%r1	! b,n	ret_t0
x22:	sh2add	%r26,%r26,%r1	! sh1add %r1,%r26,%r1	! b	e_shift	! sh1add %r1,%r29,%r29
x23:	sh2add	%r26,%r26,%r1	! sh1add %r1,%r26,%r1	! b	e_t0	! sh1add %r1,%r26,%r1
x24:	sh1add	%r26,%r26,%r1	! comb,<> %r25,0,l1	! sh3add %r1,%r29,%r29	! bv,n	0(r31)
x25:	sh2add	%r26,%r26,%r1	! comb,<> %r25,0,l0	! sh2add %r1,%r1,%r1	! b,n	ret_t0
x26:	sh1add	%r26,%r26,%r1	! sh2add %r1,%r26,%r1	! b	e_shift	! sh1add %r1,%r29,%r29
x27:	sh1add	%r26,%r26,%r1	! comb,<> %r25,0,l0	! sh3add %r1,%r1,%r1	! b,n	ret_t0
x28:	sh1add	%r26,%r26,%r1	! sh1add %r1,%r26,%r1	! b	e_shift	! sh2add %r1,%r29,%r29
x29:	sh1add	%r26,%r26,%r1	! sh1add %r1,%r26,%r1	! b	e_t0	! sh2add %r1,%r26,%r1
x30:	sh2add	%r26,%r26,%r1	! sh1add %r1,%r1,%r1	! b	e_shift	! sh1add %r1,%r29,%r29
x31:	zdep	%r26,26,27,%r1	! comb,<> %r25,0,l0	! sub	%r1,%r26,%r1	! b,n	ret_t0
x32:	zdep	%r26,26,27,%r1	! comb,<> %r25,0,l1	! add	%r29,%r1,%r29	! bv,n	0(r31)
x33:	sh3add	%r26,0,%r1	! comb,<> %r25,0,l0	! sh2add %r1,%r26,%r1	! b,n	ret_t0
x34:	zdep	%r26,27,28,%r1	! add	%r1,%r26,%r1	! b	e_shift	! sh1add %r1,%r29,%r29
x35:	sh3add	%r26,%r26,%r1	! sh1add %r1,%r1,%r1	! b	e_t0	! sh3add %r26,%r1,%r1
x36:	sh3add	%r26,%r26,%r1	! comb,<> %r25,0,l1	! sh2add %r1,%r29,%r29	! bv,n	0(r31)
x37:	sh3add	%r26,%r26,%r1	! comb,<> %r25,0,l0	! sh2add %r1,%r26,%r1	! b,n	ret_t0
x38:	sh3add	%r26,%r26,%r1	! sh1add %r1,%r26,%r1	! b	e_shift	! sh1add %r1,%r29,%r29
x39:	sh3add	%r26,%r26,%r1	! sh1add %r1,%r26,%r1	! b	e_t0	! sh1add %r1,%r26,%r1
x40:	sh2add	%r26,%r26,%r1	! comb,<> %r25,0,l1	! sh3add %r1,%r29,%r29	! bv,n	0(r31)
x41:	sh2add	%r26,%r26,%r1	! comb,<> %r25,0,l0	! sh3add %r1,%r26,%r1	! b,n	ret_t0
x42:	sh2add	%r26,%r26,%r1	! sh2add %r1,%r26,%r1	! b	e_shift	! sh1add %r1,%r29,%r29
x43:	sh2add	%r26,%r26,%r1	! sh2add %r1,%r26,%r1	! b	e_t0	! sh1add %r1,%r26,%r1
x44:	sh2add	%r26,%r26,%r1	! sh1add %r1,%r26,%r1	! b	e_shift	! sh2add %r1,%r29,%r29
x45:	sh3add	%r26,%r26,%r1	! comb,<> %r25,0,l0	! sh2add %r1,%r1,%r1	! b,n	ret_t0
x46:	sh3add	%r26,%r26,%r1	! sh2add %r1,%r1,%r1	! b	e_t0	! add	%r1,%r26,%r1
x47:	sh3add	%r26,%r26,%r1	! sh2add %r1,%r1,%r1	! b	e_t0	! sh1add %r26,%r1,%r1
x48:	sh1add	%r26,%r26,%r1	! comb,<> %r25,0,l0	! zdep	%r1,27,28,%r1	! b,n	ret_t0
x49:	sh3add	%r26,%r26,%r1	! sh2add %r1,%r1,%r1	! b	e_t0	! sh2add %r26,%r1,%r1
x50:	sh2add	%r26,%r26,%r1	! sh2add %r1,%r1,%r1	! b	e_shift	! sh1add %r1,%r29,%r29
x51:	sh3add	%r26,%r26,%r1	! sh3add %r26,%r1,%r1	! b	e_t0	! sh1add %r1,%r1,%r1
x52:	sh1add	%r26,%r26,%r1	! sh2add %r1,%r26,%r1	! b	e_shift	! sh2add %r1,%r29,%r29
x53:	sh1add	%r26,%r26,%r1	! sh2add %r1,%r26,%r1	! b	e_t0	! sh2add %r1,%r26,%r1
x54:	sh3add	%r26,%r26,%r1	! sh1add %r1,%r1,%r1	! b	e_shift	! sh1add %r1,%r29,%r29
x55:	sh3add	%r26,%r26,%r1	! sh1add %r1,%r1,%r1	! b	e_t0	! sh1add %r1,%r26,%r1
x56:	sh1add	%r26,%r26,%r1	! sh1add %r1,%r26,%r1	! b	e_shift	! sh3add %r1,%r29,%r29
x57:	sh3add	%r26,%r26,%r1	! sh1add %r1,%r26,%r1	! b	e_t0	! sh1add %r1,%r1,%r1
x58:	sh1add	%r26,%r26,%r1	! sh1add %r1,%r26,%r1	! b	e_2t0	! sh2add %r1,%r26,%r1
x59:	sh3add	%r26,%r26,%r1	! sh1add %r1,%r26,%r1	! b	e_t02a0	! sh1add %r1,%r1,%r1
x60:	sh2add	%r26,%r26,%r1	! sh1add %r1,%r1,%r1	! b	e_shift	! sh2add %r1,%r29,%r29
x61:	sh2add	%r26,%r26,%r1	! sh1add %r1,%r1,%r1	! b	e_t0	! sh2add %r1,%r26,%r1
x62:	zdep	%r26,26,27,%r1	! sub	%r1,%r26,%r1	! b	e_shift	! sh1add %r1,%r29,%r29
x63:	zdep	%r26,25,26,%r1	! comb,<> %r25,0,l0	! sub	%r1,%r26,%r1	! b,n	ret_t0
x64:	zdep	%r26,25,26,%r1	! comb,<> %r25,0,l1	! add	%r29,%r1,%r29	! bv,n	0(r31)
x65:	sh3add	%r26,0,%r1	! comb,<> %r25,0,l0	! sh3add %r1,%r26,%r1	! b,n	ret_t0
x66:	zdep	%r26,26,27,%r1	! add	%r1,%r26,%r1	! b	e_shift	! sh1add %r1,%r29,%r29
x67:	sh3add	%r26,0,%r1	! sh2add %r1,%r26,%r1	! b	e_t0	! sh1add %r1,%r26,%r1
x68:	sh3add	%r26,0,%r1	! sh1add %r1,%r26,%r1	! b	e_shift	! sh2add %r1,%r29,%r29
x69:	sh3add	%r26,0,%r1	! sh1add %r1,%r26,%r1	! b	e_t0	! sh2add %r1,%r26,%r1
x70:	zdep	%r26,25,26,%r1	! sh2add %r26,%r1,%r1	! b	e_t0	! sh1add %r26,%r1,%r1
x71:	sh3add	%r26,%r26,%r1	! sh3add %r1,0,%r1	! b	e_t0	! sub	%r1,%r26,%r1
x72:	sh3add	%r26,%r26,%r1	! comb,<> %r25,0,l1	! sh3add %r1,%r29,%r29	! bv,n	0(r31)
x73:	sh3add	%r26,%r26,%r1	! sh3add %r1,%r26,%r1	! b	e_shift	! add	%r29,%r1,%r29
x74:	sh3add	%r26,%r26,%r1	! sh2add %r1,%r26,%r1	! b	e_shift	! sh1add %r1,%r29,%r29
x75:	sh3add	%r26,%r26,%r1	! sh2add %r1,%r26,%r1	! b	e_t0	! sh1add %r1,%r26,%r1
x76:	sh3add	%r26,%r26,%r1	! sh1add %r1,%r26,%r1	! b	e_shift	! sh2add %r1,%r29,%r29
x77:	sh3add	%r26,%r26,%r1	! sh1add %r1,%r26,%r1	! b	e_t0	! sh2add %r1,%r26,%r1
x78:	sh3add	%r26,%r26,%r1	! sh1add %r1,%r26,%r1	! b	e_2t0	! sh1add %r1,%r26,%r1
x79:	zdep	%r26,27,28,%r1	! sh2add %r1,%r1,%r1	! b	e_t0	! sub	%r1,%r26,%r1
x80:	zdep	%r26,27,28,%r1	! sh2add %r1,%r1,%r1	! b	e_shift	! add	%r29,%r1,%r29
x81:	sh3add	%r26,%r26,%r1	! sh3add %r1,%r1,%r1	! b	e_shift	! add	%r29,%r1,%r29
x82:	sh2add	%r26,%r26,%r1	! sh3add %r1,%r26,%r1	! b	e_shift	! sh1add %r1,%r29,%r29
x83:	sh2add	%r26,%r26,%r1	! sh3add %r1,%r26,%r1	! b	e_t0	! sh1add %r1,%r26,%r1
x84:	sh2add	%r26,%r26,%r1	! sh2add %r1,%r26,%r1	! b	e_shift	! sh2add %r1,%r29,%r29
x85:	sh3add	%r26,0,%r1	! sh1add %r1,%r26,%r1	! b	e_t0	! sh2add %r1,%r1,%r1
x86:	sh2add	%r26,%r26,%r1	! sh2add %r1,%r26,%r1	! b	e_2t0	! sh1add %r1,%r26,%r1
x87:	sh3add	%r26,%r26,%r1	! sh3add %r1,%r1,%r1	! b	e_t02a0	! sh2add %r26,%r1,%r1
x88:	sh2add	%r26,%r26,%r1	! sh1add %r1,%r26,%r1	! b	e_shift	! sh3add %r1,%r29,%r29
x89:	sh2add	%r26,%r26,%r1	! sh1add %r1,%r26,%r1	! b	e_t0	! sh3add %r1,%r26,%r1
x90:	sh3add	%r26,%r26,%r1	! sh2add %r1,%r1,%r1	! b	e_shift	! sh1add %r1,%r29,%r29
x91:	sh3add	%r26,%r26,%r1	! sh2add %r1,%r1,%r1	! b	e_t0	! sh1add %r1,%r26,%r1
x92:	sh2add	%r26,%r26,%r1	! sh1add %r1,%r26,%r1	! b	e_4t0	! sh1add %r1,%r26,%r1
x93:	zdep	%r26,26,27,%r1	! sub	%r1,%r26,%r1	! b	e_t0	! sh1add %r1,%r1,%r1
x94:	sh3add	%r26,%r26,%r1	! sh2add %r1,%r1,%r1	! b	e_2t0	! sh1add %r26,%r1,%r1
x95:	sh3add	%r26,%r26,%r1	! sh1add %r1,%r26,%r1	! b	e_t0	! sh2add %r1,%r1,%r1
x96:	sh3add	%r26,0,%r1	! sh1add %r1,%r1,%r1	! b	e_shift	! sh2add %r1,%r29,%r29
x97:	sh3add	%r26,0,%r1	! sh1add %r1,%r1,%r1	! b	e_t0	! sh2add %r1,%r26,%r1
x98:	zdep	%r26,26,27,%r1	! sh1add %r1,%r1,%r1	! b	e_t0	! sh1add %r26,%r1,%r1
x99:	sh3add	%r26,0,%r1	! sh2add %r1,%r26,%r1	! b	e_t0	! sh1add %r1,%r1,%r1
x100:	sh2add	%r26,%r26,%r1	! sh2add %r1,%r1,%r1	! b	e_shift	! sh2add %r1,%r29,%r29
x101:	sh2add	%r26,%r26,%r1	! sh2add %r1,%r1,%r1	! b	e_t0	! sh2add %r1,%r26,%r1
x102:	zdep	%r26,26,27,%r1	! sh1add %r26,%r1,%r1	! b	e_t0	! sh1add %r1,%r1,%r1
x103:	sh2add	%r26,%r26,%r1	! sh2add %r1,%r1,%r1	! b	e_t02a0	! sh2add %r1,%r26,%r1
x104:	sh1add	%r26,%r26,%r1	! sh2add %r1,%r26,%r1	! b	e_shift	! sh3add %r1,%r29,%r29
x105:	sh2add	%r26,%r26,%r1	! sh2add %r1,%r26,%r1	! b	e_t0	! sh2add %r1,%r1,%r1
x106:	sh1add	%r26,%r26,%r1	! sh2add %r1,%r26,%r1	! b	e_2t0	! sh2add %r1,%r26,%r1
x107:	sh3add	%r26,%r26,%r1	! sh2add %r26,%r1,%r1	! b	e_t02a0	! sh3add %r1,%r26,%r1
x108:	sh3add	%r26,%r26,%r1	! sh1add %r1,%r1,%r1	! b	e_shift	! sh2add %r1,%r29,%r29
x109:	sh3add	%r26,%r26,%r1	! sh1add %r1,%r1,%r1	! b	e_t0	! sh2add %r1,%r26,%r1
x110:	sh3add	%r26,%r26,%r1	! sh1add %r1,%r1,%r1	! b	e_2t0	! sh1add %r1,%r26,%r1
x111:	sh3add	%r26,%r26,%r1	! sh2add %r1,%r26,%r1	! b	e_t0	! sh1add %r1,%r1,%r1
x112:	sh1add	%r26,%r26,%r1	! sh1add %r1,%r26,%r1	! b	e_t0	! zdep	%r1,27,28,%r1
x113:	sh3add	%r26,%r26,%r1	! sh2add %r1,%r26,%r1	! b	e_t02a0	! sh1add %r1,%r1,%r1
x114:	sh3add	%r26,%r26,%r1	! sh1add %r1,%r26,%r1	! b	e_2t0	! sh1add %r1,%r1,%r1
x115:	sh3add	%r26,%r26,%r1	! sh1add %r1,%r26,%r1	! b	e_2t0a0	! sh1add %r1,%r1,%r1
x116:	sh1add	%r26,%r26,%r1	! sh1add %r1,%r26,%r1	! b	e_4t0	! sh2add %r1,%r26,%r1
x117:	sh1add	%r26,%r26,%r1	! sh2add %r1,%r26,%r1	! b	e_t0	! sh3add %r1,%r1,%r1
x118:	sh1add	%r26,%r26,%r1	! sh2add %r1,%r26,%r1	! b	e_t0a0	! sh3add %r1,%r1,%r1
x119:	sh1add	%r26,%r26,%r1	! sh2add %r1,%r26,%r1	! b	e_t02a0	! sh3add %r1,%r1,%r1
x120:	sh2add	%r26,%r26,%r1	! sh1add %r1,%r1,%r1	! b	e_shift	! sh3add %r1,%r29,%r29
x121:	sh2add	%r26,%r26,%r1	! sh1add %r1,%r1,%r1	! b	e_t0	! sh3add %r1,%r26,%r1
x122:	sh2add	%r26,%r26,%r1	! sh1add %r1,%r1,%r1	! b	e_2t0	! sh2add %r1,%r26,%r1
x123:	sh2add	%r26,%r26,%r1	! sh3add %r1,%r26,%r1	! b	e_t0	! sh1add %r1,%r1,%r1
x124:	zdep	%r26,26,27,%r1	! sub	%r1,%r26,%r1	! b	e_shift	! sh2add %r1,%r29,%r29
x125:	sh2add	%r26,%r26,%r1	! sh2add %r1,%r1,%r1	! b	e_t0	! sh2add %r1,%r1,%r1
x126:	zdep	%r26,25,26,%r1	! sub	%r1,%r26,%r1	! b	e_shift	! sh1add %r1,%r29,%r29
x127:	zdep	%r26,24,25,%r1	! comb,<> %r25,0,l0	! sub	%r1,%r26,%r1	! b,n	ret_t0
x128:	zdep	%r26,24,25,%r1	! comb,<> %r25,0,l1	! add	%r29,%r1,%r29	! bv,n	0(r31)
x129:	zdep	%r26,24,25,%r1	! comb,<> %r25,0,l0	! add	%r1,%r26,%r1	! b,n	ret_t0
x130:	zdep	%r26,25,26,%r1	! add	%r1,%r26,%r1	! b	e_shift	! sh1add %r1,%r29,%r29
x131:	sh3add	%r26,0,%r1	! sh3add %r1,%r26,%r1	! b	e_t0	! sh1add %r1,%r26,%r1
x132:	sh3add	%r26,0,%r1	! sh2add %r1,%r26,%r1	! b	e_shift	! sh2add %r1,%r29,%r29
x133:	sh3add	%r26,0,%r1	! sh2add %r1,%r26,%r1	! b	e_t0	! sh2add %r1,%r26,%r1
x134:	sh3add	%r26,0,%r1	! sh2add %r1,%r26,%r1	! b	e_2t0	! sh1add %r1,%r26,%r1
x135:	sh3add	%r26,%r26,%r1	! sh2add %r1,%r1,%r1	! b	e_t0	! sh1add %r1,%r1,%r1
x136:	sh3add	%r26,0,%r1	! sh1add %r1,%r26,%r1	! b	e_shift	! sh3add %r1,%r29,%r29
x137:	sh3add	%r26,0,%r1	! sh1add %r1,%r26,%r1	! b	e_t0	! sh3add %r1,%r26,%r1
x138:	sh3add	%r26,0,%r1	! sh1add %r1,%r26,%r1	! b	e_2t0	! sh2add %r1,%r26,%r1
x139:	sh3add	%r26,0,%r1	! sh1add %r1,%r26,%r1	! b	e_2t0a0	! sh2add %r1,%r26,%r1
x140:	sh1add	%r26,%r26,%r1	! sh1add %r1,%r26,%r1	! b	e_4t0	! sh2add %r1,%r1,%r1
x141:	sh3add	%r26,0,%r1	! sh1add %r1,%r26,%r1	! b	e_4t0a0	! sh1add %r1,%r26,%r1
x142:	sh3add	%r26,%r26,%r1	! sh3add %r1,0,%r1	! b	e_2t0	! sub	%r1,%r26,%r1
x143:	zdep	%r26,27,28,%r1	! sh3add %r1,%r1,%r1	! b	e_t0	! sub	%r1,%r26,%r1
x144:	sh3add	%r26,%r26,%r1	! sh3add %r1,0,%r1	! b	e_shift	! sh1add %r1,%r29,%r29
x145:	sh3add	%r26,%r26,%r1	! sh3add %r1,0,%r1	! b	e_t0	! sh1add %r1,%r26,%r1
x146:	sh3add	%r26,%r26,%r1	! sh3add %r1,%r26,%r1	! b	e_shift	! sh1add %r1,%r29,%r29
x147:	sh3add	%r26,%r26,%r1	! sh3add %r1,%r26,%r1	! b	e_t0	! sh1add %r1,%r26,%r1
x148:	sh3add	%r26,%r26,%r1	! sh2add %r1,%r26,%r1	! b	e_shift	! sh2add %r1,%r29,%r29
x149:	sh3add	%r26,%r26,%r1	! sh2add %r1,%r26,%r1	! b	e_t0	! sh2add %r1,%r26,%r1
x150:	sh3add	%r26,%r26,%r1	! sh2add %r1,%r26,%r1	! b	e_2t0	! sh1add %r1,%r26,%r1
x151:	sh3add	%r26,%r26,%r1	! sh2add %r1,%r26,%r1	! b	e_2t0a0	! sh1add %r1,%r26,%r1
x152:	sh3add	%r26,%r26,%r1	! sh1add %r1,%r26,%r1	! b	e_shift	! sh3add %r1,%r29,%r29
x153:	sh3add	%r26,%r26,%r1	! sh1add %r1,%r26,%r1	! b	e_t0	! sh3add %r1,%r26,%r1
x154:	sh3add	%r26,%r26,%r1	! sh1add %r1,%r26,%r1	! b	e_2t0	! sh2add %r1,%r26,%r1
x155:	zdep	%r26,26,27,%r1	! sub	%r1,%r26,%r1	! b	e_t0	! sh2add %r1,%r1,%r1
x156:	sh3add	%r26,%r26,%r1	! sh1add %r1,%r26,%r1	! b	e_4t0	! sh1add %r1,%r26,%r1
x157:	zdep	%r26,26,27,%r1	! sub	%r1,%r26,%r1	! b	e_t02a0	! sh2add %r1,%r1,%r1
x158:	zdep	%r26,27,28,%r1	! sh2add %r1,%r1,%r1	! b	e_2t0	! sub	%r1,%r26,%r1
x159:	zdep	%r26,26,27,%r1	! sh2add %r1,%r1,%r1	! b	e_t0	! sub	%r1,%r26,%r1
x160:	sh2add	%r26,%r26,%r1	! sh2add %r1,0,%r1	! b	e_shift	! sh3add %r1,%r29,%r29
x161:	sh3add	%r26,0,%r1	! sh2add %r1,%r1,%r1	! b	e_t0	! sh2add %r1,%r26,%r1
x162:	sh3add	%r26,%r26,%r1	! sh3add %r1,%r1,%r1	! b	e_shift	! sh1add %r1,%r29,%r29
x163:	sh3add	%r26,%r26,%r1	! sh3add %r1,%r1,%r1	! b	e_t0	! sh1add %r1,%r26,%r1
x164:	sh2add	%r26,%r26,%r1	! sh3add %r1,%r26,%r1	! b	e_shift	! sh2add %r1,%r29,%r29
x165:	sh3add	%r26,0,%r1	! sh2add %r1,%r26,%r1	! b	e_t0	! sh2add %r1,%r1,%r1
x166:	sh2add	%r26,%r26,%r1	! sh3add %r1,%r26,%r1	! b	e_2t0	! sh1add %r1,%r26,%r1
x167:	sh2add	%r26,%r26,%r1	! sh3add %r1,%r26,%r1	! b	e_2t0a0	! sh1add %r1,%r26,%r1
x168:	sh2add	%r26,%r26,%r1	! sh2add %r1,%r26,%r1	! b	e_shift	! sh3add %r1,%r29,%r29
x169:	sh2add	%r26,%r26,%r1	! sh2add %r1,%r26,%r1	! b	e_t0	! sh3add %r1,%r26,%r1
x170:	zdep	%r26,26,27,%r1	! sh1add %r26,%r1,%r1	! b	e_t0	! sh2add %r1,%r1,%r1
x171:	sh3add	%r26,%r26,%r1	! sh1add %r1,%r26,%r1	! b	e_t0	! sh3add %r1,%r1,%r1
x172:	sh2add	%r26,%r26,%r1	! sh2add %r1,%r26,%r1	! b	e_4t0	! sh1add %r1,%r26,%r1
x173:	sh3add	%r26,%r26,%r1	! sh1add %r1,%r26,%r1	! b	e_t02a0	! sh3add %r1,%r1,%r1
x174:	zdep	%r26,26,27,%r1	! sh1add %r26,%r1,%r1	! b	e_t04a0	! sh2add %r1,%r1,%r1
x175:	sh3add	%r26,0,%r1	! sh1add %r1,%r26,%r1	! b	e_5t0	! sh1add %r1,%r26,%r1
x176:	sh2add	%r26,%r26,%r1	! sh2add %r1,%r26,%r1	! b	e_8t0	! add	%r1,%r26,%r1
x177:	sh2add	%r26,%r26,%r1	! sh2add %r1,%r26,%r1	! b	e_8t0a0	! add	%r1,%r26,%r1
x178:	sh2add	%r26,%r26,%r1	! sh1add %r1,%r26,%r1	! b	e_2t0	! sh3add %r1,%r26,%r1
x179:	sh2add	%r26,%r26,%r1	! sh1add %r1,%r26,%r1	! b	e_2t0a0	! sh3add %r1,%r26,%r1
x180:	sh3add	%r26,%r26,%r1	! sh2add %r1,%r1,%r1	! b	e_shift	! sh2add %r1,%r29,%r29
x181:	sh3add	%r26,%r26,%r1	! sh2add %r1,%r1,%r1	! b	e_t0	! sh2add %r1,%r26,%r1
x182:	sh3add	%r26,%r26,%r1	! sh2add %r1,%r1,%r1	! b	e_2t0	! sh1add %r1,%r26,%r1
x183:	sh3add	%r26,%r26,%r1	! sh2add %r1,%r1,%r1	! b	e_2t0a0	! sh1add %r1,%r26,%r1
x184:	sh2add	%r26,%r26,%r1	! sh3add %r1,%r1,%r1	! b	e_4t0	! add	%r1,%r26,%r1
x185:	sh3add	%r26,%r26,%r1	! sh2add %r1,%r26,%r1	! b	e_t0	! sh2add %r1,%r1,%r1
x186:	zdep	%r26,26,27,%r1	! sub	%r1,%r26,%r1	! b	e_2t0	! sh1add %r1,%r1,%r1
x187:	sh3add	%r26,%r26,%r1	! sh2add %r1,%r26,%r1	! b	e_t02a0	! sh2add %r1,%r1,%r1
x188:	sh3add	%r26,%r26,%r1	! sh2add %r1,%r1,%r1	! b	e_4t0	! sh1add %r26,%r1,%r1
x189:	sh2add	%r26,%r26,%r1	! sh2add %r1,%r26,%r1	! b	e_t0	! sh3add %r1,%r1,%r1
x190:	sh3add	%r26,%r26,%r1	! sh1add %r1,%r26,%r1	! b	e_2t0	! sh2add %r1,%r1,%r1
x191:	zdep	%r26,25,26,%r1	! sh1add %r1,%r1,%r1	! b	e_t0	! sub	%r1,%r26,%r1
x192:	sh3add	%r26,0,%r1	! sh1add %r1,%r1,%r1	! b	e_shift	! sh3add %r1,%r29,%r29
x193:	sh3add	%r26,0,%r1	! sh1add %r1,%r1,%r1	! b	e_t0	! sh3add %r1,%r26,%r1
x194:	sh3add	%r26,0,%r1	! sh1add %r1,%r1,%r1	! b	e_2t0	! sh2add %r1,%r26,%r1
x195:	sh3add	%r26,0,%r1	! sh3add %r1,%r26,%r1	! b	e_t0	! sh1add %r1,%r1,%r1
x196:	sh3add	%r26,0,%r1	! sh1add %r1,%r1,%r1	! b	e_4t0	! sh1add %r1,%r26,%r1
x197:	sh3add	%r26,0,%r1	! sh1add %r1,%r1,%r1	! b	e_4t0a0	! sh1add %r1,%r26,%r1
x198:	zdep	%r26,25,26,%r1	! sh1add %r26,%r1,%r1	! b	e_t0	! sh1add %r1,%r1,%r1
x199:	sh3add	%r26,0,%r1	! sh2add %r1,%r26,%r1	! b	e_2t0a0	! sh1add %r1,%r1,%r1
x200:	sh2add	%r26,%r26,%r1	! sh2add %r1,%r1,%r1	! b	e_shift	! sh3add %r1,%r29,%r29
x201:	sh2add	%r26,%r26,%r1	! sh2add %r1,%r1,%r1	! b	e_t0	! sh3add %r1,%r26,%r1
x202:	sh2add	%r26,%r26,%r1	! sh2add %r1,%r1,%r1	! b	e_2t0	! sh2add %r1,%r26,%r1
x203:	sh2add	%r26,%r26,%r1	! sh2add %r1,%r1,%r1	! b	e_2t0a0	! sh2add %r1,%r26,%r1
x204:	sh3add	%r26,0,%r1	! sh1add %r1,%r26,%r1	! b	e_4t0	! sh1add %r1,%r1,%r1
x205:	sh2add	%r26,%r26,%r1	! sh3add %r1,%r26,%r1	! b	e_t0	! sh2add %r1,%r1,%r1
x206:	zdep	%r26,25,26,%r1	! sh2add %r26,%r1,%r1	! b	e_t02a0	! sh1add %r1,%r1,%r1
x207:	sh3add	%r26,0,%r1	! sh1add %r1,%r26,%r1	! b	e_3t0	! sh2add %r1,%r26,%r1
x208:	sh2add	%r26,%r26,%r1	! sh2add %r1,%r1,%r1	! b	e_8t0	! add	%r1,%r26,%r1
x209:	sh2add	%r26,%r26,%r1	! sh2add %r1,%r1,%r1	! b	e_8t0a0	! add	%r1,%r26,%r1
x210:	sh2add	%r26,%r26,%r1	! sh2add %r1,%r26,%r1	! b	e_2t0	! sh2add %r1,%r1,%r1
x211:	sh2add	%r26,%r26,%r1	! sh2add %r1,%r26,%r1	! b	e_2t0a0	! sh2add %r1,%r1,%r1
x212:	sh1add	%r26,%r26,%r1	! sh2add %r1,%r26,%r1	! b	e_4t0	! sh2add %r1,%r26,%r1
x213:	sh1add	%r26,%r26,%r1	! sh2add %r1,%r26,%r1	! b	e_4t0a0	! sh2add %r1,%r26,%r1
x214:	sh3add	%r26,%r26,%r1	! sh2add %r26,%r1,%r1	! b	e2t04a0	! sh3add %r1,%r26,%r1
x215:	sh2add	%r26,%r26,%r1	! sh2add %r1,%r26,%r1	! b	e_5t0	! sh1add %r1,%r26,%r1
x216:	sh3add	%r26,%r26,%r1	! sh1add %r1,%r1,%r1	! b	e_shift	! sh3add %r1,%r29,%r29
x217:	sh3add	%r26,%r26,%r1	! sh1add %r1,%r1,%r1	! b	e_t0	! sh3add %r1,%r26,%r1
x218:	sh3add	%r26,%r26,%r1	! sh1add %r1,%r1,%r1	! b	e_2t0	! sh2add %r1,%r26,%r1
x219:	sh3add	%r26,%r26,%r1	! sh3add %r1,%r26,%r1	! b	e_t0	! sh1add %r1,%r1,%r1
x220:	sh1add	%r26,%r26,%r1	! sh3add %r1,%r1,%r1	! b	e_4t0	! sh1add %r1,%r26,%r1
x221:	sh1add	%r26,%r26,%r1	! sh3add %r1,%r1,%r1	! b	e_4t0a0	! sh1add %r1,%r26,%r1
x222:	sh3add	%r26,%r26,%r1	! sh2add %r1,%r26,%r1	! b	e_2t0	! sh1add %r1,%r1,%r1
x223:	sh3add	%r26,%r26,%r1	! sh2add %r1,%r26,%r1	! b	e_2t0a0	! sh1add %r1,%r1,%r1
x224:	sh3add	%r26,%r26,%r1	! sh1add %r1,%r1,%r1	! b	e_8t0	! add	%r1,%r26,%r1
x225:	sh3add	%r26,%r26,%r1	! sh2add %r1,%r1,%r1	! b	e_t0	! sh2add %r1,%r1,%r1
x226:	sh1add	%r26,%r26,%r1	! sh1add %r1,%r26,%r1	! b	e_t02a0	! zdep	%r1,26,27,%r1
x227:	sh3add	%r26,%r26,%r1	! sh2add %r1,%r1,%r1	! b	e_t02a0	! sh2add %r1,%r1,%r1
x228:	sh3add	%r26,%r26,%r1	! sh1add %r1,%r26,%r1	! b	e_4t0	! sh1add %r1,%r1,%r1
x229:	sh3add	%r26,%r26,%r1	! sh1add %r1,%r26,%r1	! b	e_4t0a0	! sh1add %r1,%r1,%r1
x230:	sh3add	%r26,%r26,%r1	! sh2add %r1,%r1,%r1	! b	e_5t0	! add	%r1,%r26,%r1
x231:	sh3add	%r26,%r26,%r1	! sh1add %r1,%r26,%r1	! b	e_3t0	! sh2add %r1,%r26,%r1
x232:	sh1add	%r26,%r26,%r1	! sh1add %r1,%r26,%r1	! b	e_8t0	! sh2add %r1,%r26,%r1
x233:	sh1add	%r26,%r26,%r1	! sh1add %r1,%r26,%r1	! b	e_8t0a0	! sh2add %r1,%r26,%r1
x234:	sh1add	%r26,%r26,%r1	! sh2add %r1,%r26,%r1	! b	e_2t0	! sh3add %r1,%r1,%r1
x235:	sh1add	%r26,%r26,%r1	! sh2add %r1,%r26,%r1	! b	e_2t0a0	! sh3add %r1,%r1,%r1
x236:	sh3add	%r26,%r26,%r1	! sh1add %r1,%r26,%r1	! b	e4t08a0	! sh1add %r1,%r1,%r1
x237:	zdep	%r26,27,28,%r1	! sh2add %r1,%r1,%r1	! b	e_3t0	! sub	%r1,%r26,%r1
x238:	sh1add	%r26,%r26,%r1	! sh2add %r1,%r26,%r1	! b	e2t04a0	! sh3add %r1,%r1,%r1
x239:	zdep	%r26,27,28,%r1	! sh2add %r1,%r1,%r1	! b	e_t0ma0	! sh1add %r1,%r1,%r1
x240:	sh3add	%r26,%r26,%r1	! add	%r1,%r26,%r1	! b	e_8t0	! sh1add %r1,%r1,%r1
x241:	sh3add	%r26,%r26,%r1	! add	%r1,%r26,%r1	! b	e_8t0a0	! sh1add %r1,%r1,%r1
x242:	sh2add	%r26,%r26,%r1	! sh1add %r1,%r1,%r1	! b	e_2t0	! sh3add %r1,%r26,%r1
x243:	sh3add	%r26,%r26,%r1	! sh3add %r1,%r1,%r1	! b	e_t0	! sh1add %r1,%r1,%r1
x244:	sh2add	%r26,%r26,%r1	! sh1add %r1,%r1,%r1	! b	e_4t0	! sh2add %r1,%r26,%r1
x245:	sh3add	%r26,0,%r1	! sh1add %r1,%r1,%r1	! b	e_5t0	! sh1add %r1,%r26,%r1
x246:	sh2add	%r26,%r26,%r1	! sh3add %r1,%r26,%r1	! b	e_2t0	! sh1add %r1,%r1,%r1
x247:	sh2add	%r26,%r26,%r1	! sh3add %r1,%r26,%r1	! b	e_2t0a0	! sh1add %r1,%r1,%r1
x248:	zdep	%r26,26,27,%r1	! sub	%r1,%r26,%r1	! b	e_shift	! sh3add %r1,%r29,%r29
x249:	zdep	%r26,26,27,%r1	! sub	%r1,%r26,%r1	! b	e_t0	! sh3add %r1,%r26,%r1
x250:	sh2add	%r26,%r26,%r1	! sh2add %r1,%r1,%r1	! b	e_2t0	! sh2add %r1,%r1,%r1
x251:	sh2add	%r26,%r26,%r1	! sh2add %r1,%r1,%r1	! b	e_2t0a0	! sh2add %r1,%r1,%r1
x252:	zdep	%r26,25,26,%r1	! sub	%r1,%r26,%r1	! b	e_shift	! sh2add %r1,%r29,%r29
x253:	zdep	%r26,25,26,%r1	! sub	%r1,%r26,%r1	! b	e_t0	! sh2add %r1,%r26,%r1
x254:	zdep	%r26,24,25,%r1	! sub	%r1,%r26,%r1	! b	e_shift	! sh1add %r1,%r29,%r29
x255:	zdep	%r26,23,24,%r1	! comb,<> %r25,0,l0	! sub	%r1,%r26,%r1	! b,n	ret_t0

;1040 insts before this.
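; The e_* tails below fold the last partial product t0 (in %r1) into
; the running result %r29, then either return or go back for the next
; multiplier byte.  The apparent naming convention: e_t0 adds t0;
; e_2t0, e_4t0, e_8t0 add 2*t0, 4*t0, 8*t0; e_3t0 and e_5t0 add 3*t0
; and 5*t0; a trailing "a0"/"2a0"/"4a0" adds in arg0, 2*arg0 or 4*arg0
; as well (so e_2t0a0 adds 2*t0 + arg0) while "ma0" subtracts arg0;
; and e_shift just shifts the multiplicand for the next 8-bit digit.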
ret_t0:	bv	0(r31)
e_t0:	add	%r29,%r1,%r29

e_shift: comb,<> %r25,0,l2
	zdep	%r26,23,24,%r26	; %r26 <<= 8 ***********
	bv,n	0(r31)

e_t0ma0: comb,<> %r25,0,l0
	sub	%r1,%r26,%r1
	bv	0(r31)
	add	%r29,%r1,%r29

e_t0a0:	comb,<>	%r25,0,l0
	add	%r1,%r26,%r1
	bv	0(r31)
	add	%r29,%r1,%r29

e_t02a0: comb,<> %r25,0,l0
	sh1add	%r26,%r1,%r1
	bv	0(r31)
	add	%r29,%r1,%r29

e_t04a0: comb,<> %r25,0,l0
	sh2add	%r26,%r1,%r1
	bv	0(r31)
	add	%r29,%r1,%r29

e_2t0:	comb,<>	%r25,0,l1
	sh1add	%r1,%r29,%r29
	bv,n	0(r31)

e_2t0a0: comb,<> %r25,0,l0
	sh1add	%r1,%r26,%r1
	bv	0(r31)
	add	%r29,%r1,%r29

e2t04a0: sh1add	%r26,%r1,%r1
	comb,<>	%r25,0,l1
	sh1add	%r1,%r29,%r29
	bv,n	0(r31)

e_3t0:	comb,<>	%r25,0,l0
	sh1add	%r1,%r1,%r1
	bv	0(r31)
	add	%r29,%r1,%r29

e_4t0:	comb,<>	%r25,0,l1
	sh2add	%r1,%r29,%r29
	bv,n	0(r31)

e_4t0a0: comb,<> %r25,0,l0
	sh2add	%r1,%r26,%r1
	bv	0(r31)
	add	%r29,%r1,%r29

e4t08a0: sh1add	%r26,%r1,%r1
	comb,<>	%r25,0,l1
	sh2add	%r1,%r29,%r29
	bv,n	0(r31)

e_5t0:	comb,<>	%r25,0,l0
	sh2add	%r1,%r1,%r1
	bv	0(r31)
	add	%r29,%r1,%r29

e_8t0:	comb,<>	%r25,0,l1
	sh3add	%r1,%r29,%r29
	bv,n	0(r31)

e_8t0a0: comb,<> %r25,0,l0
	sh3add	%r1,%r26,%r1
	bv	0(r31)
	add	%r29,%r1,%r29

	.procend
	.end

	.import	$$divI_2,millicode
	.import	$$divI_3,millicode
	.import	$$divI_4,millicode
	.import	$$divI_5,millicode
	.import	$$divI_6,millicode
	.import	$$divI_7,millicode
	.import	$$divI_8,millicode
	.import	$$divI_9,millicode
	.import	$$divI_10,millicode
	.import	$$divI_12,millicode
	.import	$$divI_14,millicode
	.import	$$divI_15,millicode
	.export	$$divI,millicode
	.export	$$divoI,millicode
$$divoI:
	.proc
	.callinfo NO_CALLS
	comib,=,n -1,arg1,negative1	; when divisor == -1
$$divI:
	comib,>>=,n 15,arg1,small_divisor
	add,>=	0,arg0,retreg		; move dividend, if retreg < 0,
normal1:
	sub	0,retreg,retreg		;   make it positive
	sub	0,arg1,temp		; clear carry,
					;   negate the divisor
	ds	0,temp,0		; set V-bit to the comple-
					;   ment of the divisor sign
	add	retreg,retreg,retreg	; shift msb bit into carry
	ds	r0,arg1,temp		; 1st divide step, if no carry
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 2nd divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 3rd divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 4th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 5th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 6th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 7th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 8th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 9th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 10th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 11th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 12th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 13th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 14th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 15th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 16th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 17th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 18th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 19th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 20th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 21st divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 22nd divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 23rd divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 24th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 25th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 26th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 27th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 28th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 29th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 30th divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 31st divide step
	addc	retreg,retreg,retreg	; shift retreg with/into carry
	ds	temp,arg1,temp		; 32nd divide step,
	addc	retreg,retreg,retreg	; shift last retreg bit into retreg
	xor,>=	arg0,arg1,0		; get correct sign of quotient
	sub	0,retreg,retreg		;   based on operand signs
	bv,n	0(r31)
	nop
;______________________________________________________________________
small_divisor:
	blr,n	arg1,r0
	nop
; table for divisor == 0,1, ... ,15
	addit,=	0,arg1,r0	; trap if divisor == 0
	nop
	bv	0(r31)		; divisor == 1
	copy	arg0,retreg
	b,n	$$divI_2	; divisor == 2
	nop
	b,n	$$divI_3	; divisor == 3
	nop
	b,n	$$divI_4	; divisor == 4
	nop
	b,n	$$divI_5	; divisor == 5
	nop
	b,n	$$divI_6	; divisor == 6
	nop
	b,n	$$divI_7	; divisor == 7
	nop
	b,n	$$divI_8	; divisor == 8
	nop
	b,n	$$divI_9	; divisor == 9
	nop
	b,n	$$divI_10	; divisor == 10
	nop
	b	normal1		; divisor == 11
	add,>=	0,arg0,retreg
	b,n	$$divI_12	; divisor == 12
	nop
	b	normal1		; divisor == 13
	add,>=	0,arg0,retreg
	b,n	$$divI_14	; divisor == 14
	nop
	b,n	$$divI_15	; divisor == 15
	nop
;______________________________________________________________________
negative1:
	sub	0,arg0,retreg	; result is negation of dividend
	bv	0(r31)
	addo	arg0,arg1,r0	; trap iff dividend==0x80000000 && divisor==-1
	.procend
	.end
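
; Note on the addo in the delay slot above: addo adds and traps on
; signed overflow, and dividend + (-1) overflows a 32-bit two's-
; complement add only when dividend == 0x80000000 -- exactly the one
; case where the true quotient 0x80000000 / -1 == 2**31 is
; unrepresentable.  Every other division by -1 simply returns the
; negated dividend computed by the sub just before the bv.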