Lines matching full:matmul

18   %matmul = linalg.matmul ins(%lhs, %rhs: tensor<512x512xf32>, tensor<512x512xf32>)
23 ins(%matmul, %bias : tensor<512x512xf32>, tensor<512x512xf32>)
43 %arg1: !transform.op<"linalg.matmul">,
74 %arg1: !transform.op<"linalg.matmul">,
76 transform.debug.emit_remark_at %arg1, "matmul"
77 : !transform.op<"linalg.matmul">
92 debug-bind-trailing-args=linalg.matmul,linalg.elemwise_binary})"
95 …ciate the two extra arguments of the top-level sequence with all `linalg.matmul` and `linalg.elemw…
98 sequence.mlir:7:13: remark: matmul
99 %matmul = linalg.matmul ins(%lhs, %rhs: tensor<512x512xf32>, tensor<512x512xf32>)
101 sequence.mlir:7:13: note: see current operation: %0 = linalg.matmul ins(%arg0, %arg1 : tensor<512x5…
117 …rm, we are ready to apply the transformations. Let us first try tiling the matmul operation itself.
123 %arg1: !transform.op<"linalg.matmul">,
128 : (!transform.op<"linalg.matmul">)
157 %5 = linalg.matmul
187 %arg1: !transform.op<"linalg.matmul">,
191 : (!transform.op<"linalg.matmul">) -> (!transform.any_op, !transform.any_op)
194 transform.debug.emit_remark_at %arg1, "remark" : !transform.op<"linalg.matmul">
207 %mm = transform.cast %matmul : !transform.op<"linalg.matmul"> to !transform.any_op
221 %arg1: !transform.op<"linalg.matmul">,
225 %casted = transform.cast %arg1 : !transform.op<"linalg.matmul">
231 : (!transform.op<"linalg.matmul">)
248 transform.debug.emit_remark_at %matmul, "elemwise_binaries" : !transform.op<"linalg.matmul">
251 ^bb0(%root: !transform.any_op, %matmul: !transform.op<"linalg.matmul">, %elemwise: !transform.op<"l…
259 …heir sizes and inject recomputation if desired. So instead of tiling the matmul operation, we are …
265 %arg1: !transform.op<"linalg.matmul">,
292 : (!transform.op<"linalg.matmul">, !transform.any_op)
310 %arg1: !transform.op<"linalg.matmul">,
337 : (!transform.op<"linalg.matmul">, !transform.any_op)
341 // "add" operation and fuses matmul into the loop, but doesn't affect the
390 %matmul = linalg.matmul ins(%lhs, %rhs: tensor<512x512xf32>, tensor<512x512xf32>)
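Taken together, the hits above come from a transform-dialect script whose top-level sequence receives pre-matched handles for the payload operations. A minimal sketch assembled from those fragments follows; the `failures(propagate)` mode and the exact argument names are assumptions, but the block-argument signature, `transform.debug.emit_remark_at`, and `transform.cast` all appear verbatim in the matched lines:

```mlir
// Trailing block arguments are bound to payload ops via
// debug-bind-trailing-args=linalg.matmul,linalg.elemwise_binary
// (see the hit at line 92). failures(propagate) is an assumption.
transform.sequence failures(propagate) {
^bb0(%root: !transform.any_op,
     %matmul: !transform.op<"linalg.matmul">,
     %elemwise: !transform.op<"linalg.elemwise_binary">):
  // Emit a remark at every matched matmul (hits at lines 76-77).
  transform.debug.emit_remark_at %matmul, "matmul"
      : !transform.op<"linalg.matmul">
  // Erase the specific op type from the handle (hit at line 207).
  %mm = transform.cast %matmul
      : !transform.op<"linalg.matmul"> to !transform.any_op
  transform.yield
}
```

Running such a script with the transform interpreter reports remarks at each matched `linalg.matmul`, as in the diagnostic shown at sequence.mlir:7:13 above.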