#
# Copyright (c) 1985 Regents of the University of California.
#
# Use and reproduction of this software are granted in accordance with
# the terms and conditions specified in the Berkeley Software License
# Agreement (in particular, this entails acknowledgement of the programs'
# source, and inclusion of this notice) with the additional understanding
# that all recipients should regard themselves as participants in an
# ongoing research project and hence should feel obligated to report
# their experiences (good or bad) with these elementary function codes,
# using "sendbug 4bsd-bugs@BERKELEY", to the authors.
#

# @(#)argred.s	1.1 (ELEFUNT) 09/06/85

# libm$argred implements Bob Corbett's argument reduction and
# libm$sincos implements Peter Tang's double precision sin/cos.
#
# Note: The two entry points libm$argred and libm$sincos are meant
#       to be used only by _sin, _cos and _tan.
#
# method: true range reduction to [-pi/4,pi/4], P. Tang & B. Corbett
# S. McDonald, April 4, 1985
#
	.globl	libm$argred
	.globl	libm$sincos
	.text
	.align	1

libm$argred:
#
# Compare the argument with the largest that can be
# reduced by table lookup.  r3 := |x| will be used in table_lookup .
#
	movd	r0,r3
	bgeq	abs1
	mnegd	r3,r3
abs1:
	cmpd	r3,$0d+4.55530934770520019583e+01
	blss	small_arg
	jsb	trigred
	rsb
small_arg:
	jsb	table_lookup
	rsb
#
# At this point,
#	   r0  contains the quadrant number, 0, 1, 2, or 3;
#	r2/r1  contains the reduced argument as a D-format number;
#	   r3  contains an F-format extension to the reduced argument;
#	   r4  contains a 0 or 1 corresponding to a sin or cos entry.
#
libm$sincos:
#
# Compensate for a cosine entry by adding one to the quadrant number.
#
	addl2	r4,r0
#
# Polyd clobbers r5-r0 ; save X in r7/r6 .
# This can be avoided by rewriting trigred .
#
	movd	r1,r6
#
# Likewise, save alpha in r8 .
# This can be avoided by rewriting trigred .
#
	movf	r3,r8
#
# Odd or even quadrant?  cosine if odd, sine otherwise.
# Save floor(quadrant/2) in r9 ; it determines the final sign.
#
	rotl	$-1,r0,r9
	blss	cosine
sine:
	muld2	r1,r1		# Xsq = X * X
	polyd	r1,$7,sin_coef	# Q = P(Xsq) , of deg 7
	mulf3	$0f3.0,r8,r4	# beta = 3 * alpha
	mulf2	r0,r4		# beta = Q * beta
	addf2	r8,r4		# beta = alpha + beta
	muld2	r6,r0		# S(X) = X * Q
#	cvtfd	r4,r4		... r5 = 0 after a polyd.
	addd2	r4,r0		# S(X) = beta + S(X)
	addd2	r6,r0		# S(X) = X + S(X)
	brb	done
cosine:
	muld2	r6,r6		# Xsq = X * X
	beql	zero_arg
	mulf2	r1,r8		# beta = X * alpha
	polyd	r6,$7,cos_coef	# Q = P'(Xsq) , of deg 7
	subd3	r0,r8,r0	# beta = beta - Q
	subw2	$0x80,r6	# Xsq = Xsq / 2
	addd2	r0,r6		# Xsq = Xsq + beta
zero_arg:
	subd3	r6,$0d1.0,r0	# C(X) = 1 - Xsq
done:
	blbc	r9,even
	mnegd	r0,r0
even:
	rsb

	.data
	.align	2

sin_coef:
	.double	0d-7.53080332264191085773e-13	# s7 = 2^-29 -1.a7f2504ffc49f8..
	.double	0d+1.60573519267703489121e-10	# s6 = 2^-21  1.611adaede473c8..
	.double	0d-2.50520965150706067211e-08	# s5 = 2^-1a -1.ae644921ed8382..
	.double	0d+2.75573191800593885716e-06	# s4 = 2^-13  1.71de3a4b884278..
	.double	0d-1.98412698411850507950e-04	# s3 = 2^-0d -1.a01a01a0125e7d..
	.double	0d+8.33333333333325688985e-03	# s2 = 2^-07  1.11111111110e50
	.double	0d-1.66666666666666664354e-01	# s1 = 2^-03 -1.55555555555554
	.double	0d+0.00000000000000000000e+00	# s0 = 0

cos_coef:
	.double	0d-1.13006966202629430300e-11	# s7 = 2^-25 -1.8D9BA04D1374BE..
	.double	0d+2.08746646574796004700e-09	# s6 = 2^-1D  1.1EE632650350BA..
	.double	0d-2.75573073031284417300e-07	# s5 = 2^-16 -1.27E4F31411719E..
	.double	0d+2.48015872682668025200e-05	# s4 = 2^-10  1.A01A0196B902E8..
	.double	0d-1.38888888888464709200e-03	# s3 = 2^-0A -1.6C16C16C11FACE..
	.double	0d+4.16666666666664761400e-02	# s2 = 2^-05  1.5555555555539E
	.double	0d+0.00000000000000000000e+00	# s1 = 0
	.double	0d+0.00000000000000000000e+00	# s0 = 0

#
# Multiples of pi/2 expressed as the sum of three doubles,
#
# trailing:	n * pi/2 , n = 0, 1, 2, ..., 29 ,
#		trailing[n] ,
#
# middle:	n * pi/2 , n = 0, 1, 2, ..., 29 ,
#		middle[n] ,
#
# leading:	n * pi/2 , n = 0, 1, 2, ..., 29 ,
#		leading[n] ,
#
# where
#	leading[n]  := (n * pi/2) rounded,
#	middle[n]   := (n * pi/2 - leading[n]) rounded,
#	trailing[n] := ((n * pi/2 - leading[n]) - middle[n]) rounded .

trailing:
	.double	0d+0.00000000000000000000e+00	#  0 * pi/2  trailing
	.double	0d+4.33590506506189049611e-35	#  1 * pi/2  trailing
	.double	0d+8.67181013012378099223e-35	#  2 * pi/2  trailing
	.double	0d+1.30077151951856714215e-34	#  3 * pi/2  trailing
	.double	0d+1.73436202602475619845e-34	#  4 * pi/2  trailing
	.double	0d-1.68390735624352669192e-34	#  5 * pi/2  trailing
	.double	0d+2.60154303903713428430e-34	#  6 * pi/2  trailing
	.double	0d-8.16726343231148352150e-35	#  7 * pi/2  trailing
	.double	0d+3.46872405204951239689e-34	#  8 * pi/2  trailing
	.double	0d+3.90231455855570147991e-34	#  9 * pi/2  trailing
	.double	0d-3.36781471248705338384e-34	# 10 * pi/2  trailing
	.double	0d-1.06379439835298071785e-33	# 11 * pi/2  trailing
	.double	0d+5.20308607807426856861e-34	# 12 * pi/2  trailing
	.double	0d+5.63667658458045770509e-34	# 13 * pi/2  trailing
	.double	0d-1.63345268646229670430e-34	# 14 * pi/2  trailing
	.double	0d-1.19986217995610764801e-34	# 15 * pi/2  trailing
	.double	0d+6.93744810409902479378e-34	# 16 * pi/2  trailing
	.double	0d-8.03640094449267300110e-34	# 17 * pi/2  trailing
	.double	0d+7.80462911711140295982e-34	# 18 * pi/2  trailing
	.double	0d-7.16921993148029483506e-34	# 19 * pi/2  trailing
	.double	0d-6.73562942497410676769e-34	# 20 * pi/2  trailing
	.double	0d-6.30203891846791677593e-34	# 21 * pi/2  trailing
	.double	0d-2.12758879670596143570e-33	# 22 * pi/2  trailing
	.double	0d+2.53800212047402350390e-33	# 23 * pi/2  trailing
	.double	0d+1.04061721561485371372e-33	# 24 * pi/2  trailing
	.double	0d+6.11729905311472319056e-32	# 25 * pi/2  trailing
	.double	0d+1.12733531691609154102e-33	# 26 * pi/2  trailing
	.double	0d-3.70049587943078297272e-34	# 27 * pi/2  trailing
	.double	0d-3.26690537292459340860e-34	# 28 * pi/2  trailing
	.double	0d-1.14812616507957271361e-34	# 29 * pi/2  trailing

middle:
	.double	0d+0.00000000000000000000e+00	#  0 * pi/2  middle
	.double	0d+5.72118872610983179676e-18	#  1 * pi/2  middle
	.double	0d+1.14423774522196635935e-17	#  2 * pi/2  middle
	.double	0d-3.83475850529283316309e-17	#  3 * pi/2  middle
	.double	0d+2.28847549044393271871e-17	#  4 * pi/2  middle
	.double	0d-2.69052076007086676522e-17	#  5 * pi/2  middle
	.double	0d-7.66951701058566632618e-17	#  6 * pi/2  middle
	.double	0d-1.54628301484890040587e-17	#  7 * pi/2  middle
	.double	0d+4.57695098088786543741e-17	#  8 * pi/2  middle
	.double	0d+1.07001849766246313192e-16	#  9 * pi/2  middle
	.double	0d-5.38104152014173353044e-17	# 10 * pi/2  middle
	.double	0d-2.14622680169080983801e-16	# 11 * pi/2  middle
	.double	0d-1.53390340211713326524e-16	# 12 * pi/2  middle
	.double	0d-9.21580002543456677056e-17	# 13 * pi/2  middle
	.double	0d-3.09256602969780081173e-17	# 14 * pi/2  middle
	.double	0d+3.03066796603896507006e-17	# 15 * pi/2  middle
	.double	0d+9.15390196177573087482e-17	# 16 * pi/2  middle
	.double	0d+1.52771359575124969107e-16	# 17 * pi/2  middle
	.double	0d+2.14003699532492626384e-16	# 18 * pi/2  middle
	.double	0d-1.68853170360202329427e-16	# 19 * pi/2  middle
	.double	0d-1.07620830402834670609e-16	# 20 * pi/2  middle
	.double	0d+3.97700719404595604379e-16	# 21 * pi/2  middle
	.double	0d-4.29245360338161967602e-16	# 22 * pi/2  middle
	.double	0d-3.68013020380794313406e-16	# 23 * pi/2  middle
	.double	0d-3.06780680423426653047e-16	# 24 * pi/2  middle
	.double	0d-2.45548340466059054318e-16	# 25 * pi/2  middle
	.double	0d-1.84316000508691335411e-16	# 26 * pi/2  middle
	.double	0d-1.23083660551323675053e-16	# 27 * pi/2  middle
	.double	0d-6.18513205939560162346e-17	# 28 * pi/2  middle
	.double	0d-6.18980636588357585202e-19	# 29 * pi/2  middle

leading:
	.double	0d+0.00000000000000000000e+00	#  0 * pi/2  leading
	.double	0d+1.57079632679489661351e+00	#  1 * pi/2  leading
	.double	0d+3.14159265358979322702e+00	#  2 * pi/2  leading
	.double	0d+4.71238898038468989604e+00	#  3 * pi/2  leading
	.double	0d+6.28318530717958645404e+00	#  4 * pi/2  leading
	.double	0d+7.85398163397448312306e+00	#  5 * pi/2  leading
	.double	0d+9.42477796076937979208e+00	#  6 * pi/2  leading
	.double	0d+1.09955742875642763501e+01	#  7 * pi/2  leading
	.double	0d+1.25663706143591729081e+01	#  8 * pi/2  leading
	.double	0d+1.41371669411540694661e+01	#  9 * pi/2  leading
	.double	0d+1.57079632679489662461e+01	# 10 * pi/2  leading
	.double	0d+1.72787595947438630262e+01	# 11 * pi/2  leading
	.double	0d+1.88495559215387595842e+01	# 12 * pi/2  leading
	.double	0d+2.04203522483336561422e+01	# 13 * pi/2  leading
	.double	0d+2.19911485751285527002e+01	# 14 * pi/2  leading
	.double	0d+2.35619449019234492582e+01	# 15 * pi/2  leading
	.double	0d+2.51327412287183458162e+01	# 16 * pi/2  leading
	.double	0d+2.67035375555132423742e+01	# 17 * pi/2  leading
	.double	0d+2.82743338823081389322e+01	# 18 * pi/2  leading
	.double	0d+2.98451302091030359342e+01	# 19 * pi/2  leading
	.double	0d+3.14159265358979324922e+01	# 20 * pi/2  leading
	.double	0d+3.29867228626928286062e+01	# 21 * pi/2  leading
	.double	0d+3.45575191894877260523e+01	# 22 * pi/2  leading
	.double	0d+3.61283155162826226103e+01	# 23 * pi/2  leading
	.double	0d+3.76991118430775191683e+01	# 24 * pi/2  leading
	.double	0d+3.92699081698724157263e+01	# 25 * pi/2  leading
	.double	0d+4.08407044966673122843e+01	# 26 * pi/2  leading
	.double	0d+4.24115008234622088423e+01	# 27 * pi/2  leading
	.double	0d+4.39822971502571054003e+01	# 28 * pi/2  leading
	.double	0d+4.55530934770520019583e+01	# 29 * pi/2  leading

twoOverPi:
	.double	0d+6.36619772367581343076e-01
	.text
	.align	1

table_lookup:
	muld3	r3,twoOverPi,r0
	cvtrdl	r0,r0			# n = nearest int to ((2/pi)*|x|) rnded
	mull3	$8,r0,r5
	subd2	leading(r5),r3		# p = (|x| - leading n*pi/2) exactly
	subd3	middle(r5),r3,r1	# q = (p - middle n*pi/2) rounded
	subd2	r1,r3			# r = (p - q)
	subd2	middle(r5),r3		# r = r - middle n*pi/2
	subd2	trailing(r5),r3		# r = r - trailing n*pi/2 rounded
#
# If the original argument was negative,
# negate the reduced argument and
# adjust the octant/quadrant number.
#
	tstw	4(ap)
	bgeq	abs2
	mnegf	r1,r1
	mnegf	r3,r3
#	subb3	r0,$8,r0	...used for pi/4 reduction -S.McD
	subb3	r0,$4,r0
abs2:
#
# Clear all unneeded octant/quadrant bits.
#
#	bicb2	$0xf8,r0	...used for pi/4 reduction -S.McD
	bicb2	$0xfc,r0
	rsb
#
#	p.0
	.text
	.align	2
#
# Only 256 (actually 225) bits of 2/pi are needed for VAX double
# precision; this was determined by enumerating all the nearest
# machine integer multiples of pi/2 using continued fractions.
# (8a8d3673775b7ff7 required the most bits.)  -S.McD
#
	.long	0
	.long	0
	.long	0xaef1586d
	.long	0x9458eaf7
	.long	0x10e4107f
	.long	0xd8a5664f
	.long	0x4d377036
	.long	0x09d5f47d
	.long	0x91054a7f
	.long	0xbe60db93
bits2opi:
	.long	0x00000028
	.long	0
#
# Note: wherever you see the word `octant', read `quadrant'.
# Currently this code is set up for pi/2 argument reduction.
# By uncommenting/commenting the appropriate lines, it will
# also serve as a pi/4 argument reduction code.
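# The table-lookup reduction above can be sketched in a higher-level
# language.  The following Python sketch is illustrative, not the VAX
# code: the function and table names are hypothetical, the three-part
# tables are derived from a two-part Cody-Waite-style split of pi/2
# (so trailing[] degenerates to zero), and PIO2_LO is the well-known
# double-precision residual of pi/2 rather than a value from this file.
#
#	import math
#
#	PIO2_HI = math.pi / 2                  # pi/2 rounded to a double
#	PIO2_LO = 6.123233995736766e-17        # residual pi/2 - PIO2_HI
#
#	N = 30
#	leading  = [n * PIO2_HI for n in range(N)]   # n*pi/2, leading part
#	middle   = [n * PIO2_LO for n in range(N)]   # first correction
#	trailing = [0.0] * N                         # second correction
#	                                             # (zero in this sketch)
#
#	def table_lookup(x):
#	    """Return (quadrant, reduced, extension), |reduced| <= pi/4."""
#	    ax = abs(x)
#	    n = round(ax * (2.0 / math.pi))    # nearest int to (2/pi)*|x|
#	    p = ax - leading[n]                # p = |x| - leading n*pi/2
#	    q = p - middle[n]                  # q = p - middle n*pi/2
#	    r = (p - q) - middle[n] - trailing[n]  # extension of q
#	    if x < 0:                          # mirror the tstw/mnegf path
#	        q, r, n = -q, -r, 4 - n
#	    return n & 3, q, r
#
# For example, table_lookup(math.pi) lands in quadrant 2 with a reduced
# argument near -1.22e-16, the part of pi that a bare double cannot hold.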
#

#	p.1
# Trigred performs argument reduction
# for the trigonometric functions.  It
# takes one input argument, a D-format
# number in r1/r0 .  The magnitude of
# the input argument must be greater
# than or equal to 1/2 .  Trigred produces
# three results:  the number of the octant
# occupied by the argument, the reduced
# argument, and an extension of the
# reduced argument.  The octant number is
# returned in r0 .  The reduced argument
# is returned as a D-format number in
# r2/r1 .  An 8 bit extension of the
# reduced argument is returned as an
# F-format number in r3 .
#	p.2
trigred:
#
# Save the sign of the input argument.
#
	movw	r0,-(sp)
#
# Extract the exponent field.
#
	extzv	$7,$7,r0,r2
#
# Convert the fraction part of the input
# argument into a quadword integer.
#
	bicw2	$0xff80,r0
	bisb2	$0x80,r0	# -S.McD
	rotl	$16,r0,r0
	rotl	$16,r1,r1
#
# If r1 is negative, add 1 to r0 .  This
# adjustment is made so that the two's
# complement multiplications done later
# will produce unsigned results.
#
	bgeq	posmid
	incl	r0
posmid:
#	p.3
#
# Set r3 to the address of the first quadword
# used to obtain the needed portion of 2/pi .
# The address is longword aligned to ensure
# efficient access.
#
	ashl	$-3,r2,r3
	bicb2	$3,r3
	subl3	r3,$bits2opi,r3
#
# Set r2 to the size of the shift needed to
# obtain the correct portion of 2/pi .
#
	bicb2	$0xe0,r2
#	p.4
#
# Move the needed 128 bits of 2/pi into
# r11 - r8 .  Adjust the numbers to allow
# for unsigned multiplication.
#
	ashq	r2,(r3),r10

	subl2	$4,r3
	ashq	r2,(r3),r9
	bgeq	signoff1
	incl	r11
signoff1:
	subl2	$4,r3
	ashq	r2,(r3),r8
	bgeq	signoff2
	incl	r10
signoff2:
	subl2	$4,r3
	ashq	r2,(r3),r7
	bgeq	signoff3
	incl	r9
signoff3:
#	p.5
#
# Multiply the contents of r0/r1 by the
# slice of 2/pi in r11 - r8 .
#
	emul	r0,r8,$0,r4
	emul	r0,r9,r5,r5
	emul	r0,r10,r6,r6

	emul	r1,r8,$0,r7
	emul	r1,r9,r8,r8
	emul	r1,r10,r9,r9
	emul	r1,r11,r10,r10

	addl2	r4,r8
	adwc	r5,r9
	adwc	r6,r10
#	p.6
#
# If there are more than five leading zeros
# after the first two quotient bits or if there
# are more than five leading ones after the first
# two quotient bits, generate more fraction bits.
# Otherwise, branch to code to produce the result.
#
	bicl3	$0xc1ffffff,r10,r4
	beql	more1
	cmpl	$0x3e000000,r4
	bneq	result
more1:
#	p.7
#
# Generate another 32 result bits.
#
	subl2	$4,r3
	ashq	r2,(r3),r5
	bgeq	signoff4

	emul	r1,r6,$0,r4
	addl2	r1,r5
	emul	r0,r6,r5,r5
	addl2	r0,r6
	brb	addbits1

signoff4:
	emul	r1,r6,$0,r4
	emul	r0,r6,r5,r5

addbits1:
	addl2	r5,r7
	adwc	r6,r8
	adwc	$0,r9
	adwc	$0,r10
#	p.8
#
# Check for massive cancellation.
#
	bicl3	$0xc0000000,r10,r6
#	bneq	more2		-S.McD  Test was backwards
	beql	more2
	cmpl	$0x3fffffff,r6
	bneq	result
more2:
#	p.9
#
# If massive cancellation has occurred,
# generate another 24 result bits.
# Testing has shown there will always be
# enough bits after this point.
#
	subl2	$4,r3
	ashq	r2,(r3),r5
	bgeq	signoff5

	emul	r0,r6,r4,r5
	addl2	r0,r6
	brb	addbits2

signoff5:
	emul	r0,r6,r4,r5

addbits2:
	addl2	r6,r7
	adwc	$0,r8
	adwc	$0,r9
	adwc	$0,r10
#	p.10
#
# The following code produces the reduced
# argument from the product bits contained
# in r10 - r7 .
#
result:
#
# Extract the octant number from r10 .
#
#	extzv	$29,$3,r10,r0	...used for pi/4 reduction -S.McD
	extzv	$30,$2,r10,r0
#
# Clear the octant bits in r10 .
#
#	bicl2	$0xe0000000,r10	...used for pi/4 reduction -S.McD
	bicl2	$0xc0000000,r10
#
# Zero the sign flag.
#
	clrl	r5
#	p.11
#
# Check to see if the fraction is greater than
# or equal to one-half.  If it is, add one
# to the octant number, set the sign flag
# on, and replace the fraction with 1 minus
# the fraction.
#
#	bitl	$0x10000000,r10	...used for pi/4 reduction -S.McD
	bitl	$0x20000000,r10
	beql	small
	incl	r0
	incl	r5
#	subl3	r10,$0x1fffffff,r10	...used for pi/4 reduction -S.McD
	subl3	r10,$0x3fffffff,r10
	mcoml	r9,r9
	mcoml	r8,r8
	mcoml	r7,r7
small:
#	p.12
#
##	Test whether the first 29 bits of the	...used for pi/4 reduction -S.McD
# Test whether the first 30 bits of the
# fraction are zero.
#
	tstl	r10
	beql	tiny
#
# Find the position of the first one bit in r10 .
#
	cvtld	r10,r1
	extzv	$7,$7,r1,r1
#
# Compute the size of the shift needed.
#
	subl3	r1,$32,r6
#
# Shift up the high order 64 bits of the
# product.
#
	ashq	r6,r9,r10
	ashq	r6,r8,r9
	brb	mult
#	p.13
#
# Test to see if the sign bit of r9 is on.
#
tiny:
	tstl	r9
	bgeq	tinier
#
# If it is, shift the product bits up 32 bits.
#
	movl	$32,r6
	movq	r8,r10
	tstl	r10
	brb	mult
#	p.14
#
# Test whether r9 is zero.  It is probably
# impossible for both r10 and r9 to be
# zero, but until proven to be so, the test
# must be made.
#
tinier:
	beql	zero
#
# Find the position of the first one bit in r9 .
#
	cvtld	r9,r1
	extzv	$7,$7,r1,r1
#
# Compute the size of the shift needed.
#
	subl3	r1,$32,r1
	addl3	$32,r1,r6
#
# Shift up the high order 64 bits of the
# product.
#
	ashq	r1,r8,r10
	ashq	r1,r7,r9
	brb	mult
#	p.15
#
# The following code sets the reduced
# argument to zero.
#
zero:
	clrl	r1
	clrl	r2
	clrl	r3
	brw	return
#	p.16
#
# At this point, r0 contains the octant number,
# r6 indicates the number of bits the fraction
# has been shifted, r5 indicates the sign of
# the fraction, r11/r10 contain the high order
# 64 bits of the fraction, and the condition
# codes indicate whether the sign bit of r10
# is on.  The following code multiplies the
# fraction by pi/2 .
#
mult:
#
# Save r11/r10 in r4/r1 .	-S.McD
	movl	r11,r4
	movl	r10,r1
#
# If the sign bit of r10 is on, add 1 to r11 .
#
	bgeq	signoff6
	incl	r11
signoff6:
#	p.17
#
# Move pi/2 into r3/r2 .
#
	movq	$0xc90fdaa22168c235,r2
#
# Multiply the fraction by the portion of pi/2
# in r2 .
#
	emul	r2,r10,$0,r7
	emul	r2,r11,r8,r7
#
# Multiply the fraction by the portion of pi/2
# in r3 .
#
	emul	r3,r10,$0,r9
	emul	r3,r11,r10,r10
#
# Add the product bits together.
#
	addl2	r7,r9
	adwc	r8,r10
	adwc	$0,r11
#
# Compensate for not sign extending r8 above.	-S.McD
#
	tstl	r8
	bgeq	signoff6a
	decl	r11
signoff6a:
#
# Compensate for r11/r10 being unsigned.	-S.McD
#
	addl2	r2,r10
	adwc	r3,r11
#
# Compensate for r3/r2 being unsigned.	-S.McD
#
	addl2	r1,r10
	adwc	r4,r11
#	p.18
#
# If the sign bit of r11 is zero, shift the
# product bits up one bit and increment r6 .
#
	blss	signon
	incl	r6
	ashq	$1,r10,r10
	tstl	r9
	bgeq	signoff7
	incl	r10
signoff7:
signon:
#	p.19
#
# Shift the 56 most significant product
# bits into r9/r8 .  The sign extension
# will be handled later.
#
	ashq	$-8,r10,r8
#
# Convert the low order 8 bits of r10
# into an F-format number.
#
	cvtbf	r10,r3
#
# If the result of the conversion was
# negative, add 1 to r9/r8 .
#
	bgeq	chop
	incl	r8
	adwc	$0,r9
#
# If r9 is now zero, branch to special
# code to handle that possibility.
#
	beql	carryout
chop:
#	p.20
#
# Convert the number in r9/r8 into a
# D-format number in r2/r1 .
#
	rotl	$16,r8,r2
	rotl	$16,r9,r1
#
# Set the exponent field to the appropriate
# value.  Note that the extra bits created by
# sign extension are now eliminated.
#
	subw3	r6,$131,r6
	insv	r6,$7,$9,r1
#
# Set the exponent field of the F-format
# number in r3 to the appropriate value.
#
	tstf	r3
	beql	return
#	extzv	$7,$8,r3,r4	-S.McD
	extzv	$7,$7,r3,r4
	addw2	r4,r6
#	subw2	$217,r6		-S.McD
	subw2	$64,r6
	insv	r6,$7,$8,r3
	brb	return
#	p.21
#
# The following code generates the appropriate
# result for the unlikely possibility that
# rounding the number in r9/r8 resulted in
# a carry out.
#
carryout:
	clrl	r1
	clrl	r2
	subw3	r6,$132,r6
	insv	r6,$7,$9,r1
	tstf	r3
	beql	return
	extzv	$7,$8,r3,r4
	addw2	r4,r6
	subw2	$218,r6
	insv	r6,$7,$8,r3
#	p.22
#
# The following code makes any needed
# adjustments to the signs of the
# results or to the octant number, and
# then returns.
#
return:
#
# Test if the fraction was greater than or
# equal to 1/2 .  If so, negate the reduced
# argument.
#
	blbc	r5,signoff8
	mnegf	r1,r1
	mnegf	r3,r3
signoff8:
#	p.23
#
# If the original argument was negative,
# negate the reduced argument and
# adjust the octant number.
#
	tstw	(sp)+
	bgeq	signoff9
	mnegf	r1,r1
	mnegf	r3,r3
#	subb3	r0,$8,r0	...used for pi/4 reduction -S.McD
	subb3	r0,$4,r0
signoff9:
#
# Clear all unneeded octant bits.
#
#	bicb2	$0xf8,r0	...used for pi/4 reduction -S.McD
	bicb2	$0xfc,r0
#
# Return.
#
	rsb
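# The quadrant dispatch in libm$sincos above (add the cos-entry flag to
# the quadrant, use the cosine kernel for odd quadrants, and let
# floor(quadrant/2) mod 2 fix the sign) can be sketched as the Python
# below.  The names are hypothetical, and math.sin/math.cos stand in
# for the sin_coef/cos_coef polynomial kernels, so this shows only the
# control flow, not the polynomial evaluation.
#
#	import math
#
#	def sincos(quadrant, reduced, want_cos):
#	    """quadrant/reduced come from argument reduction;
#	    want_cos is 0 for a sin entry, 1 for a cos entry (r4)."""
#	    q = quadrant + want_cos          # addl2 r4,r0
#	    if q & 1:                        # rotl $-1,r0,r9 ; blss cosine
#	        result = math.cos(reduced)   # cosine kernel
#	    else:
#	        result = math.sin(reduced)   # sine kernel
#	    if (q >> 1) & 1:                 # blbc r9,even ; mnegd r0,r0
#	        result = -result
#	    return result
#
# A caller such as _sin would first reduce x to (quadrant, reduced)
# and then invoke the dispatch with want_cos = 0; _cos passes 1, which
# advances the quadrant by one instead of adding pi/2 to the argument.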