
[mlir][NFC] update flang/Optimizer/Transforms create APIs (11/n) #149915


Merged: 1 commit into llvm:main from makslevental/update-create-11n on Jul 21, 2025

Conversation

makslevental (Contributor)

See #147168 for more info.
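
For readers skimming the patch: every hunk below makes the same mechanical substitution, replacing the rewriter's templated rewriter.create<OpTy>(loc, ...) call with the op's static OpTy::create(rewriter, loc, ...) entry point. A minimal sketch of the pattern, modeled on the first hunk in AbstractResult.cpp (the emboxBuffer wrapper and its signature are illustrative assumptions, not code from the patch):

#include "flang/Optimizer/Dialect/FIROps.h"
#include "mlir/IR/PatternMatch.h"

// Illustrative wrapper (not in the patch) showing the before/after shape of
// the change; operand names mirror the first hunk in AbstractResult.cpp.
static mlir::Value emboxBuffer(mlir::PatternRewriter &rewriter,
                               mlir::Location loc, mlir::Type argType,
                               mlir::Value buffer, mlir::Value shape,
                               mlir::ValueRange typeParams) {
  // Before: the op is built through the rewriter's templated helper.
  //   return rewriter.create<fir::EmboxOp>(loc, argType, buffer, shape,
  //                                        /*slice=*/mlir::Value{}, typeParams);
  // After: the op's static create() takes the builder as its first argument;
  // the remaining arguments are unchanged.
  return fir::EmboxOp::create(rewriter, loc, argType, buffer, shape,
                              /*slice=*/mlir::Value{}, typeParams);
}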


github-actions bot commented Jul 21, 2025

✅ With the latest revision this PR passed the C/C++ code formatter.

@makslevental makslevental force-pushed the makslevental/update-create-11n branch from c46be1a to 39c60bf Compare July 21, 2025 22:02
@makslevental makslevental marked this pull request as ready for review July 21, 2025 22:37
@makslevental makslevental requested review from clementval, jeanPerier and kazutakahirata and removed request for jeanPerier July 21, 2025 22:37
@llvmbot llvmbot added the flang (Flang issues not falling into any other category) and flang:fir-hlfir labels Jul 21, 2025
@llvmbot llvmbot (Member) commented Jul 21, 2025

@llvm/pr-subscribers-flang-fir-hlfir

Author: Maksim Levental (makslevental)

Changes

See #147168 for more info.


Patch is 127.87 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/149915.diff

22 Files Affected:

  • (modified) flang/lib/Optimizer/Transforms/AbstractResult.cpp (+14-14)
  • (modified) flang/lib/Optimizer/Transforms/AffineDemotion.cpp (+7-5)
  • (modified) flang/lib/Optimizer/Transforms/AffinePromotion.cpp (+21-19)
  • (modified) flang/lib/Optimizer/Transforms/ArrayValueCopy.cpp (+32-31)
  • (modified) flang/lib/Optimizer/Transforms/AssumedRankOpConversion.cpp (+3-3)
  • (modified) flang/lib/Optimizer/Transforms/CUFAddConstructor.cpp (+16-14)
  • (modified) flang/lib/Optimizer/Transforms/CUFComputeSharedMemoryOffsetsAndSize.cpp (+7-6)
  • (modified) flang/lib/Optimizer/Transforms/CUFGPUToLLVMConversion.cpp (+38-37)
  • (modified) flang/lib/Optimizer/Transforms/CUFOpConversion.cpp (+64-59)
  • (modified) flang/lib/Optimizer/Transforms/CharacterConversion.cpp (+17-15)
  • (modified) flang/lib/Optimizer/Transforms/ConstantArgumentGlobalisation.cpp (+8-8)
  • (modified) flang/lib/Optimizer/Transforms/ControlFlowConverter.cpp (+48-47)
  • (modified) flang/lib/Optimizer/Transforms/DebugTypeGenerator.cpp (+3-2)
  • (modified) flang/lib/Optimizer/Transforms/FIRToSCF.cpp (+9-9)
  • (modified) flang/lib/Optimizer/Transforms/GenRuntimeCallsForTest.cpp (+2-2)
  • (modified) flang/lib/Optimizer/Transforms/LoopVersioning.cpp (+22-22)
  • (modified) flang/lib/Optimizer/Transforms/MemoryAllocation.cpp (+4-4)
  • (modified) flang/lib/Optimizer/Transforms/MemoryUtils.cpp (+12-12)
  • (modified) flang/lib/Optimizer/Transforms/PolymorphicOpConversion.cpp (+46-42)
  • (modified) flang/lib/Optimizer/Transforms/SimplifyFIROperations.cpp (+10-10)
  • (modified) flang/lib/Optimizer/Transforms/SimplifyIntrinsics.cpp (+130-126)
  • (modified) flang/lib/Optimizer/Transforms/StackArrays.cpp (+4-4)
diff --git a/flang/lib/Optimizer/Transforms/AbstractResult.cpp b/flang/lib/Optimizer/Transforms/AbstractResult.cpp
index 59e2eeb76c715..391cfe3ceb9a2 100644
--- a/flang/lib/Optimizer/Transforms/AbstractResult.cpp
+++ b/flang/lib/Optimizer/Transforms/AbstractResult.cpp
@@ -137,9 +137,9 @@ class CallConversion : public mlir::OpRewritePattern<Op> {
     auto buffer = saveResult.getMemref();
     mlir::Value arg = buffer;
     if (mustEmboxResult(result.getType(), shouldBoxResult))
-      arg = rewriter.create<fir::EmboxOp>(
-          loc, argType, buffer, saveResult.getShape(), /*slice*/ mlir::Value{},
-          saveResult.getTypeparams());
+      arg = fir::EmboxOp::create(rewriter, loc, argType, buffer,
+                                 saveResult.getShape(), /*slice*/ mlir::Value{},
+                                 saveResult.getTypeparams());
 
     llvm::SmallVector<mlir::Type> newResultTypes;
     bool isResultBuiltinCPtr = fir::isa_builtin_cptr_type(result.getType());
@@ -155,8 +155,8 @@ class CallConversion : public mlir::OpRewritePattern<Op> {
         if (!isResultBuiltinCPtr)
           newOperands.emplace_back(arg);
         newOperands.append(op.getOperands().begin(), op.getOperands().end());
-        newOp = rewriter.create<fir::CallOp>(loc, *op.getCallee(),
-                                             newResultTypes, newOperands);
+        newOp = fir::CallOp::create(rewriter, loc, *op.getCallee(),
+                                    newResultTypes, newOperands);
       } else {
         // Indirect calls.
         llvm::SmallVector<mlir::Type> newInputTypes;
@@ -169,13 +169,13 @@ class CallConversion : public mlir::OpRewritePattern<Op> {
 
         llvm::SmallVector<mlir::Value> newOperands;
         newOperands.push_back(
-            rewriter.create<fir::ConvertOp>(loc, newFuncTy, op.getOperand(0)));
+            fir::ConvertOp::create(rewriter, loc, newFuncTy, op.getOperand(0)));
         if (!isResultBuiltinCPtr)
           newOperands.push_back(arg);
         newOperands.append(op.getOperands().begin() + 1,
                            op.getOperands().end());
-        newOp = rewriter.create<fir::CallOp>(loc, mlir::SymbolRefAttr{},
-                                             newResultTypes, newOperands);
+        newOp = fir::CallOp::create(rewriter, loc, mlir::SymbolRefAttr{},
+                                    newResultTypes, newOperands);
       }
     }
 
@@ -191,8 +191,8 @@ class CallConversion : public mlir::OpRewritePattern<Op> {
         passArgPos =
             rewriter.getI32IntegerAttr(*op.getPassArgPos() + passArgShift);
       // TODO: propagate argument and result attributes (need to be shifted).
-      newOp = rewriter.create<fir::DispatchOp>(
-          loc, newResultTypes, rewriter.getStringAttr(op.getMethod()),
+      newOp = fir::DispatchOp::create(
+          rewriter, loc, newResultTypes, rewriter.getStringAttr(op.getMethod()),
           op.getOperands()[0], newOperands, passArgPos,
           /*arg_attrs=*/nullptr, /*res_attrs=*/nullptr,
           op.getProcedureAttrsAttr());
@@ -280,7 +280,7 @@ processReturnLikeOp(OpTy ret, mlir::Value newArg,
     // register pass, this is possible for fir.box results, or fir.record
     // with no length parameters. Simply store the result in the result
     // storage. at the return point.
-    rewriter.create<fir::StoreOp>(loc, resultValue, newArg);
+    fir::StoreOp::create(rewriter, loc, resultValue, newArg);
     rewriter.replaceOpWithNewOp<OpTy>(ret);
   }
   // Delete result old local storage if unused.
@@ -337,8 +337,8 @@ class AddrOfOpConversion : public mlir::OpRewritePattern<fir::AddrOfOp> {
       newFuncTy = getCPtrFunctionType(oldFuncTy);
     else
       newFuncTy = getNewFunctionType(oldFuncTy, shouldBoxResult);
-    auto newAddrOf = rewriter.create<fir::AddrOfOp>(addrOf.getLoc(), newFuncTy,
-                                                    addrOf.getSymbol());
+    auto newAddrOf = fir::AddrOfOp::create(rewriter, addrOf.getLoc(), newFuncTy,
+                                           addrOf.getSymbol());
     // Rather than converting all op a function pointer might transit through
     // (e.g calls, stores, loads, converts...), cast new type to the abstract
     // type. A conversion will be added when calling indirect calls of abstract
@@ -397,7 +397,7 @@ class AbstractResultOpt
         if (mustEmboxResult(resultType, shouldBoxResult)) {
           auto bufferType = fir::ReferenceType::get(resultType);
           rewriter.setInsertionPointToStart(&func.front());
-          newArg = rewriter.create<fir::BoxAddrOp>(loc, bufferType, newArg);
+          newArg = fir::BoxAddrOp::create(rewriter, loc, bufferType, newArg);
         }
         patterns.insert<ReturnOpConversion>(context, newArg);
         target.addDynamicallyLegalOp<mlir::func::ReturnOp>(
diff --git a/flang/lib/Optimizer/Transforms/AffineDemotion.cpp b/flang/lib/Optimizer/Transforms/AffineDemotion.cpp
index d45f855c9078e..f1c66a5bbcf8c 100644
--- a/flang/lib/Optimizer/Transforms/AffineDemotion.cpp
+++ b/flang/lib/Optimizer/Transforms/AffineDemotion.cpp
@@ -60,9 +60,10 @@ class AffineLoadConversion
     if (!maybeExpandedMap)
       return failure();
 
-    auto coorOp = rewriter.create<fir::CoordinateOp>(
-        op.getLoc(), fir::ReferenceType::get(op.getResult().getType()),
-        adaptor.getMemref(), *maybeExpandedMap);
+    auto coorOp = fir::CoordinateOp::create(
+        rewriter, op.getLoc(),
+        fir::ReferenceType::get(op.getResult().getType()), adaptor.getMemref(),
+        *maybeExpandedMap);
 
     rewriter.replaceOpWithNewOp<fir::LoadOp>(op, coorOp.getResult());
     return success();
@@ -83,8 +84,9 @@ class AffineStoreConversion
     if (!maybeExpandedMap)
       return failure();
 
-    auto coorOp = rewriter.create<fir::CoordinateOp>(
-        op.getLoc(), fir::ReferenceType::get(op.getValueToStore().getType()),
+    auto coorOp = fir::CoordinateOp::create(
+        rewriter, op.getLoc(),
+        fir::ReferenceType::get(op.getValueToStore().getType()),
         adaptor.getMemref(), *maybeExpandedMap);
     rewriter.replaceOpWithNewOp<fir::StoreOp>(op, adaptor.getValue(),
                                               coorOp.getResult());
diff --git a/flang/lib/Optimizer/Transforms/AffinePromotion.cpp b/flang/lib/Optimizer/Transforms/AffinePromotion.cpp
index ef82e400bea14..b032767eef6f0 100644
--- a/flang/lib/Optimizer/Transforms/AffinePromotion.cpp
+++ b/flang/lib/Optimizer/Transforms/AffinePromotion.cpp
@@ -366,8 +366,9 @@ static mlir::Type coordinateArrayElement(fir::ArrayCoorOp op) {
 static void populateIndexArgs(fir::ArrayCoorOp acoOp, fir::ShapeOp shape,
                               SmallVectorImpl<mlir::Value> &indexArgs,
                               mlir::PatternRewriter &rewriter) {
-  auto one = rewriter.create<mlir::arith::ConstantOp>(
-      acoOp.getLoc(), rewriter.getIndexType(), rewriter.getIndexAttr(1));
+  auto one = mlir::arith::ConstantOp::create(rewriter, acoOp.getLoc(),
+                                             rewriter.getIndexType(),
+                                             rewriter.getIndexAttr(1));
   auto extents = shape.getExtents();
   for (auto i = extents.begin(); i < extents.end(); i++) {
     indexArgs.push_back(one);
@@ -379,8 +380,9 @@ static void populateIndexArgs(fir::ArrayCoorOp acoOp, fir::ShapeOp shape,
 static void populateIndexArgs(fir::ArrayCoorOp acoOp, fir::ShapeShiftOp shape,
                               SmallVectorImpl<mlir::Value> &indexArgs,
                               mlir::PatternRewriter &rewriter) {
-  auto one = rewriter.create<mlir::arith::ConstantOp>(
-      acoOp.getLoc(), rewriter.getIndexType(), rewriter.getIndexAttr(1));
+  auto one = mlir::arith::ConstantOp::create(rewriter, acoOp.getLoc(),
+                                             rewriter.getIndexType(),
+                                             rewriter.getIndexAttr(1));
   auto extents = shape.getPairs();
   for (auto i = extents.begin(); i < extents.end();) {
     indexArgs.push_back(*i++);
@@ -422,13 +424,13 @@ createAffineOps(mlir::Value arrayRef, mlir::PatternRewriter &rewriter) {
 
   populateIndexArgs(acoOp, indexArgs, rewriter);
 
-  auto affineApply = rewriter.create<affine::AffineApplyOp>(
-      acoOp.getLoc(), affineMap, indexArgs);
+  auto affineApply = affine::AffineApplyOp::create(rewriter, acoOp.getLoc(),
+                                                   affineMap, indexArgs);
   auto arrayElementType = coordinateArrayElement(acoOp);
   auto newType =
       mlir::MemRefType::get({mlir::ShapedType::kDynamic}, arrayElementType);
-  auto arrayConvert = rewriter.create<fir::ConvertOp>(acoOp.getLoc(), newType,
-                                                      acoOp.getMemref());
+  auto arrayConvert = fir::ConvertOp::create(rewriter, acoOp.getLoc(), newType,
+                                             acoOp.getMemref());
   return std::make_pair(affineApply, arrayConvert);
 }
 
@@ -495,7 +497,7 @@ class AffineLoopConversion : public mlir::OpRewritePattern<fir::DoLoopOp> {
                                 affineFor.getRegionIterArgs());
     if (!results.empty()) {
       rewriter.setInsertionPointToEnd(affineFor.getBody());
-      rewriter.create<affine::AffineYieldOp>(resultOp->getLoc(), results);
+      affine::AffineYieldOp::create(rewriter, resultOp->getLoc(), results);
     }
     rewriter.finalizeOpModification(affineFor.getOperation());
 
@@ -525,8 +527,8 @@ class AffineLoopConversion : public mlir::OpRewritePattern<fir::DoLoopOp> {
   std::pair<affine::AffineForOp, mlir::Value>
   positiveConstantStep(fir::DoLoopOp op, int64_t step,
                        mlir::PatternRewriter &rewriter) const {
-    auto affineFor = rewriter.create<affine::AffineForOp>(
-        op.getLoc(), ValueRange(op.getLowerBound()),
+    auto affineFor = affine::AffineForOp::create(
+        rewriter, op.getLoc(), ValueRange(op.getLowerBound()),
         mlir::AffineMap::get(0, 1,
                              mlir::getAffineSymbolExpr(0, op.getContext())),
         ValueRange(op.getUpperBound()),
@@ -543,24 +545,24 @@ class AffineLoopConversion : public mlir::OpRewritePattern<fir::DoLoopOp> {
     auto step = mlir::getAffineSymbolExpr(2, op.getContext());
     mlir::AffineMap upperBoundMap = mlir::AffineMap::get(
         0, 3, (upperBound - lowerBound + step).floorDiv(step));
-    auto genericUpperBound = rewriter.create<affine::AffineApplyOp>(
-        op.getLoc(), upperBoundMap,
+    auto genericUpperBound = affine::AffineApplyOp::create(
+        rewriter, op.getLoc(), upperBoundMap,
         ValueRange({op.getLowerBound(), op.getUpperBound(), op.getStep()}));
     auto actualIndexMap = mlir::AffineMap::get(
         1, 2,
         (lowerBound + mlir::getAffineDimExpr(0, op.getContext())) *
             mlir::getAffineSymbolExpr(1, op.getContext()));
 
-    auto affineFor = rewriter.create<affine::AffineForOp>(
-        op.getLoc(), ValueRange(),
+    auto affineFor = affine::AffineForOp::create(
+        rewriter, op.getLoc(), ValueRange(),
         AffineMap::getConstantMap(0, op.getContext()),
         genericUpperBound.getResult(),
         mlir::AffineMap::get(0, 1,
                              1 + mlir::getAffineSymbolExpr(0, op.getContext())),
         1, op.getIterOperands());
     rewriter.setInsertionPointToStart(affineFor.getBody());
-    auto actualIndex = rewriter.create<affine::AffineApplyOp>(
-        op.getLoc(), actualIndexMap,
+    auto actualIndex = affine::AffineApplyOp::create(
+        rewriter, op.getLoc(), actualIndexMap,
         ValueRange(
             {affineFor.getInductionVar(), op.getLowerBound(), op.getStep()}));
     return std::make_pair(affineFor, actualIndex.getResult());
@@ -588,8 +590,8 @@ class AffineIfConversion : public mlir::OpRewritePattern<fir::IfOp> {
               << "AffineIfConversion: couldn't calculate affine condition\n";);
       return failure();
     }
-    auto affineIf = rewriter.create<affine::AffineIfOp>(
-        op.getLoc(), affineCondition.getIntegerSet(),
+    auto affineIf = affine::AffineIfOp::create(
+        rewriter, op.getLoc(), affineCondition.getIntegerSet(),
         affineCondition.getAffineArgs(), !op.getElseRegion().empty());
     rewriter.startOpModification(affineIf);
     affineIf.getThenBlock()->getOperations().splice(
diff --git a/flang/lib/Optimizer/Transforms/ArrayValueCopy.cpp b/flang/lib/Optimizer/Transforms/ArrayValueCopy.cpp
index 8544d17f62248..247ba953f3265 100644
--- a/flang/lib/Optimizer/Transforms/ArrayValueCopy.cpp
+++ b/flang/lib/Optimizer/Transforms/ArrayValueCopy.cpp
@@ -856,7 +856,7 @@ static bool getAdjustedExtents(mlir::Location loc,
   auto idxTy = rewriter.getIndexType();
   if (isAssumedSize(result)) {
     // Use slice information to compute the extent of the column.
-    auto one = rewriter.create<mlir::arith::ConstantIndexOp>(loc, 1);
+    auto one = mlir::arith::ConstantIndexOp::create(rewriter, loc, 1);
     mlir::Value size = one;
     if (mlir::Value sliceArg = arrLoad.getSlice()) {
       if (auto sliceOp =
@@ -896,14 +896,14 @@ static mlir::Value getOrReadExtentsAndShapeOp(
         mlir::cast<SequenceType>(dyn_cast_ptrOrBoxEleTy(boxTy)).getDimension();
     auto idxTy = rewriter.getIndexType();
     for (decltype(rank) dim = 0; dim < rank; ++dim) {
-      auto dimVal = rewriter.create<mlir::arith::ConstantIndexOp>(loc, dim);
-      auto dimInfo = rewriter.create<BoxDimsOp>(loc, idxTy, idxTy, idxTy,
-                                                arrLoad.getMemref(), dimVal);
+      auto dimVal = mlir::arith::ConstantIndexOp::create(rewriter, loc, dim);
+      auto dimInfo = BoxDimsOp::create(rewriter, loc, idxTy, idxTy, idxTy,
+                                       arrLoad.getMemref(), dimVal);
       result.emplace_back(dimInfo.getResult(1));
     }
     if (!arrLoad.getShape()) {
       auto shapeType = ShapeType::get(rewriter.getContext(), rank);
-      return rewriter.create<ShapeOp>(loc, shapeType, result);
+      return ShapeOp::create(rewriter, loc, shapeType, result);
     }
     auto shiftOp = arrLoad.getShape().getDefiningOp<ShiftOp>();
     auto shapeShiftType = ShapeShiftType::get(rewriter.getContext(), rank);
@@ -912,8 +912,8 @@ static mlir::Value getOrReadExtentsAndShapeOp(
       shapeShiftOperands.push_back(lb);
       shapeShiftOperands.push_back(extent);
     }
-    return rewriter.create<ShapeShiftOp>(loc, shapeShiftType,
-                                         shapeShiftOperands);
+    return ShapeShiftOp::create(rewriter, loc, shapeShiftType,
+                                shapeShiftOperands);
   }
   copyUsingSlice =
       getAdjustedExtents(loc, rewriter, arrLoad, result, arrLoad.getShape());
@@ -952,13 +952,13 @@ static mlir::Value genCoorOp(mlir::PatternRewriter &rewriter,
   auto module = load->getParentOfType<mlir::ModuleOp>();
   FirOpBuilder builder(rewriter, module);
   auto typeparams = getTypeParamsIfRawData(loc, builder, load, alloc.getType());
-  mlir::Value result = rewriter.create<ArrayCoorOp>(
-      loc, eleTy, alloc, shape, slice,
+  mlir::Value result = ArrayCoorOp::create(
+      rewriter, loc, eleTy, alloc, shape, slice,
       llvm::ArrayRef<mlir::Value>{originated}.take_front(dimension),
       typeparams);
   if (dimension < originated.size())
-    result = rewriter.create<fir::CoordinateOp>(
-        loc, resTy, result,
+    result = fir::CoordinateOp::create(
+        rewriter, loc, resTy, result,
         llvm::ArrayRef<mlir::Value>{originated}.drop_front(dimension));
   return result;
 }
@@ -971,13 +971,13 @@ static mlir::Value getCharacterLen(mlir::Location loc, FirOpBuilder &builder,
       // The loaded array is an emboxed value. Get the CHARACTER length from
       // the box value.
       auto eleSzInBytes =
-          builder.create<BoxEleSizeOp>(loc, charLenTy, load.getMemref());
+          BoxEleSizeOp::create(builder, loc, charLenTy, load.getMemref());
       auto kindSize =
           builder.getKindMap().getCharacterBitsize(charTy.getFKind());
       auto kindByteSize =
           builder.createIntegerConstant(loc, charLenTy, kindSize / 8);
-      return builder.create<mlir::arith::DivSIOp>(loc, eleSzInBytes,
-                                                  kindByteSize);
+      return mlir::arith::DivSIOp::create(builder, loc, eleSzInBytes,
+                                          kindByteSize);
     }
     // The loaded array is a (set of) unboxed values. If the CHARACTER's
     // length is not a constant, it must be provided as a type parameter to
@@ -1003,11 +1003,11 @@ void genArrayCopy(mlir::Location loc, mlir::PatternRewriter &rewriter,
   auto idxTy = rewriter.getIndexType();
   // Build loop nest from column to row.
   for (auto sh : llvm::reverse(extents)) {
-    auto ubi = rewriter.create<ConvertOp>(loc, idxTy, sh);
-    auto zero = rewriter.create<mlir::arith::ConstantIndexOp>(loc, 0);
-    auto one = rewriter.create<mlir::arith::ConstantIndexOp>(loc, 1);
-    auto ub = rewriter.create<mlir::arith::SubIOp>(loc, idxTy, ubi, one);
-    auto loop = rewriter.create<DoLoopOp>(loc, zero, ub, one);
+    auto ubi = ConvertOp::create(rewriter, loc, idxTy, sh);
+    auto zero = mlir::arith::ConstantIndexOp::create(rewriter, loc, 0);
+    auto one = mlir::arith::ConstantIndexOp::create(rewriter, loc, 1);
+    auto ub = mlir::arith::SubIOp::create(rewriter, loc, idxTy, ubi, one);
+    auto loop = DoLoopOp::create(rewriter, loc, zero, ub, one);
     rewriter.setInsertionPointToStart(loop.getBody());
     indices.push_back(loop.getInductionVar());
   }
@@ -1015,13 +1015,13 @@ void genArrayCopy(mlir::Location loc, mlir::PatternRewriter &rewriter,
   std::reverse(indices.begin(), indices.end());
   auto module = arrLoad->getParentOfType<mlir::ModuleOp>();
   FirOpBuilder builder(rewriter, module);
-  auto fromAddr = rewriter.create<ArrayCoorOp>(
-      loc, getEleTy(src.getType()), src, shapeOp,
+  auto fromAddr = ArrayCoorOp::create(
+      rewriter, loc, getEleTy(src.getType()), src, shapeOp,
       CopyIn && copyUsingSlice ? sliceOp : mlir::Value{},
       factory::originateIndices(loc, rewriter, src.getType(), shapeOp, indices),
       getTypeParamsIfRawData(loc, builder, arrLoad, src.getType()));
-  auto toAddr = rewriter.create<ArrayCoorOp>(
-      loc, getEleTy(dst.getType()), dst, shapeOp,
+  auto toAddr = ArrayCoorOp::create(
+      rewriter, loc, getEleTy(dst.getType()), dst, shapeOp,
       !CopyIn && copyUsingSlice ? sliceOp : mlir::Value{},
       factory::originateIndices(loc, rewriter, dst.getType(), shapeOp, indices),
       getTypeParamsIfRawData(loc, builder, arrLoad, dst.getType()));
@@ -1093,15 +1093,16 @@ allocateArrayTemp(mlir::Location loc, mlir::PatternRewriter &rewriter,
       findNonconstantExtents(baseType, extents);
   llvm::SmallVector<mlir::Value> typeParams =
       genArrayLoadTypeParameters(loc, rewriter, load);
-  mlir::Value allocmem = rewriter.create<AllocMemOp>(
-      loc, dyn_cast_ptrOrBoxEleTy(baseType), typeParams, nonconstantExtents);
+  mlir::Value allocmem =
+      AllocMemOp::create(rewriter, loc, dyn_cast_ptrOrBoxEleTy(baseType),
+                         typeParams, nonconstantExtents);
   mlir::Type eleType =
       fir::unwrapSequenceType(fir::unwrapPassByRefType(baseType));
   if (fir::isRecordWithAllocatableMember(eleType)) {
     // The allocatable component descriptors need to be set to a clean
     // deallocated status before anything is done with them.
-    mlir::Value box = rewriter.create<fir::EmboxOp>(
-        loc, fir::BoxType::get(allocmem.getType()), allocmem, shape,
+    mlir::Value box = fir::EmboxOp::create(
+        rewriter, loc, fir::BoxType::get(allocmem.getType()), allocmem, shape,
         /*slice=*/mlir::Value{}, typeParams);
     auto module = load->getParentOfType<mlir::ModuleOp>();
     FirOpBuilder builder(rewriter, module);
@@ -1111,12 +1112,12 @@ allocateArrayTemp(mlir::Location loc, mlir::PatternRewriter &rewriter,
     auto cleanup = [=](mlir::PatternRewriter &r) {
       FirOpBuilder builder(r, module);
       runtime::genDerivedTypeDestroy(builder, loc, box);
-      r.create<FreeMemOp>(loc, allocmem);
+      FreeMemOp::create(r, loc, allocmem);
     };
     return {allocmem, cleanup};
   }
   auto cleanup = [=](mlir::PatternRewriter &r) {
-    r.create<FreeMemOp>(loc, allocmem);
+    FreeMemOp::create(r, loc, allocmem);
   };
   return {allocmem, cleanup};
 }
@@ -1257,7 +1258,7 @@ cl...
[truncated]
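
The same rewrite applies when ops are built through a fir::FirOpBuilder rather than the pattern rewriter, as in the getCharacterLen hunk of ArrayValueCopy.cpp above; FirOpBuilder derives from mlir::OpBuilder, so the static create() entry points accept it directly. A minimal sketch under that assumption (the charLenInChars helper and its signature are mine, not from the patch):

#include "flang/Optimizer/Builder/FIRBuilder.h"
#include "flang/Optimizer/Dialect/FIROps.h"
#include "mlir/Dialect/Arith/IR/Arith.h"

// Hypothetical helper mirroring the getCharacterLen hunk: divide the boxed
// element size in bytes by the character kind's byte size.
static mlir::Value charLenInChars(fir::FirOpBuilder &builder,
                                  mlir::Location loc, mlir::Type charLenTy,
                                  mlir::Value box, unsigned kindBits) {
  // Old: builder.create<fir::BoxEleSizeOp>(loc, charLenTy, box)
  mlir::Value eleSzInBytes =
      fir::BoxEleSizeOp::create(builder, loc, charLenTy, box);
  mlir::Value kindByteSize =
      builder.createIntegerConstant(loc, charLenTy, kindBits / 8);
  // Old: builder.create<mlir::arith::DivSIOp>(loc, eleSzInBytes, kindByteSize)
  return mlir::arith::DivSIOp::create(builder, loc, eleSzInBytes, kindByteSize);
}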

@clementval clementval (Contributor) left a comment

LGTM

@makslevental makslevental merged commit 46f6df0 into llvm:main Jul 21, 2025
13 checks passed
@makslevental makslevental deleted the makslevental/update-create-11n branch July 21, 2025 23:37