[VectorCombine] Remove dead node immediately in VectorCombine #149047


Open · davemgreen wants to merge 2 commits into main from gh-vc-worklistorder

Conversation

@davemgreen (Collaborator) commented on Jul 16, 2025

The vector combiner processes all instructions as it first loops through the function, adding any newly created and deleted instructions to a worklist that is then processed once all nodes are done. This leaves extra uses in the graph while the initial processing is performed, leading to sub-optimal decisions for other combines. This patch changes it so that trivially dead instructions are removed immediately. The main change this requires is making sure iterator invalidation does not occur.
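
For illustration, the worklist-driven shape that makes immediate erasure safe can be sketched as below. This is a simplified sketch under assumed structure, not the exact code in the patch:

// Sketch: because processing is driven by the worklist rather than a
// basic-block iterator, a trivially dead instruction can be erased on the
// spot without invalidating anything the loop still depends on.
#include "llvm/IR/Instructions.h"
#include "llvm/Transforms/Utils/InstructionWorklist.h"
#include "llvm/Transforms/Utils/Local.h"

using namespace llvm;

static void processWorklistSketch(InstructionWorklist &Worklist) {
  while (!Worklist.isEmpty()) {
    Instruction *I = Worklist.removeOne();
    if (!I)
      continue; // entries removed from the worklist elsewhere come back as null
    if (isInstructionTriviallyDead(I)) {
      // Re-queue operands that may become dead once I's uses disappear,
      // then erase I immediately instead of deferring to a post-pass.
      for (Value *Op : I->operands())
        if (auto *OpI = dyn_cast<Instruction>(Op))
          Worklist.push(OpI);
      I->eraseFromParent();
      continue;
    }
    // ... try the individual VectorCombine folds on I here ...
  }
}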

@llvmbot (Member) commented on Jul 16, 2025

@llvm/pr-subscribers-vectorizers
@llvm/pr-subscribers-llvm-transforms

@llvm/pr-subscribers-backend-risc-v

Author: David Green (davemgreen)

Changes

This tries to mirror how InstructionWorklist is used in InstCombine, initially adding the nodes to a list that is then pushed onto the Worklist in reverse order. The general order should be the same; the main advantage is that as nodes are initially processed, the New and Old instructions are visited immediately when something is altered, removing Old instructions as they are replaced and letting other combines work without hitting OneUse checks.
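
To make the OneUse point concrete, here is a hypothetical fold guard written in the style used throughout VectorCombine (not a fold from this patch): a trivially dead instruction that still holds a use of the shuffle makes the one-use check fail, so erasing dead instructions promptly lets such folds fire.

// Hypothetical example of a one-use-guarded fold. A leftover trivially dead
// user of the shuffle makes the m_OneUse match fail and blocks the fold,
// which is why erasing dead instructions promptly helps later combines.
#include "llvm/IR/Instructions.h"
#include "llvm/IR/PatternMatch.h"

using namespace llvm;
using namespace llvm::PatternMatch;

static bool canFoldBinopOfShuffle(Instruction &I) {
  Value *Shuf, *Other;
  // m_OneUse(...) only matches if the shuffle's sole user is this binop;
  // a dead instruction still holding a use of it defeats the match.
  return match(&I, m_c_BinOp(m_OneUse(m_Value(Shuf)), m_Value(Other))) &&
         isa<ShuffleVectorInst>(Shuf);
}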


Patch is 55.44 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/149047.diff

13 Files Affected:

  • (modified) llvm/lib/Transforms/Vectorize/VectorCombine.cpp (+7-3)
  • (modified) llvm/test/Transforms/PhaseOrdering/X86/hadd.ll (+11-11)
  • (modified) llvm/test/Transforms/PhaseOrdering/X86/hsub.ll (+11-11)
  • (modified) llvm/test/Transforms/VectorCombine/AArch64/ext-extract.ll (+69-32)
  • (modified) llvm/test/Transforms/VectorCombine/AArch64/load-extractelement-scalarization.ll (+8-8)
  • (modified) llvm/test/Transforms/VectorCombine/AArch64/select-shuffle.ll (+15-31)
  • (modified) llvm/test/Transforms/VectorCombine/RISCV/load-widening.ll (+4-4)
  • (modified) llvm/test/Transforms/VectorCombine/X86/concat-boolmasks.ll (+15-49)
  • (modified) llvm/test/Transforms/VectorCombine/X86/extract-binop-inseltpoison.ll (+1-3)
  • (modified) llvm/test/Transforms/VectorCombine/X86/extract-binop.ll (+2-3)
  • (modified) llvm/test/Transforms/VectorCombine/X86/reduction-two-vecs-combine.ll (+8-8)
  • (modified) llvm/test/Transforms/VectorCombine/X86/select-shuffle.ll (+3-4)
  • (modified) llvm/test/Transforms/VectorCombine/pr88796.ll (+4-4)
diff --git a/llvm/lib/Transforms/Vectorize/VectorCombine.cpp b/llvm/lib/Transforms/Vectorize/VectorCombine.cpp
index fe8d74c43dfdc..3c7101c9f3c0d 100644
--- a/llvm/lib/Transforms/Vectorize/VectorCombine.cpp
+++ b/llvm/lib/Transforms/Vectorize/VectorCombine.cpp
@@ -3803,18 +3803,22 @@ bool VectorCombine::run() {
     }
   };
 
+  SmallVector<Instruction*, 128> InstrsForInstructionWorklist;
   for (BasicBlock &BB : F) {
     // Ignore unreachable basic blocks.
     if (!DT.isReachableFromEntry(&BB))
       continue;
-    // Use early increment range so that we can erase instructions in loop.
-    for (Instruction &I : make_early_inc_range(BB)) {
+    for (Instruction &I : BB) {
       if (I.isDebugOrPseudoInst())
         continue;
-      FoldInst(I);
+      InstrsForInstructionWorklist.push_back(&I);
     }
   }
 
+  Worklist.reserve(InstrsForInstructionWorklist.size());
+  for (auto I : reverse(InstrsForInstructionWorklist))
+    Worklist.push(I);
+
   while (!Worklist.isEmpty()) {
     Instruction *I = Worklist.removeOne();
     if (!I)
diff --git a/llvm/test/Transforms/PhaseOrdering/X86/hadd.ll b/llvm/test/Transforms/PhaseOrdering/X86/hadd.ll
index 798df4cd4ff54..f85d46689ccb0 100644
--- a/llvm/test/Transforms/PhaseOrdering/X86/hadd.ll
+++ b/llvm/test/Transforms/PhaseOrdering/X86/hadd.ll
@@ -121,12 +121,12 @@ define <8 x i16> @add_v8i16_u1234567(<8 x i16> %a, <8 x i16> %b) {
 
 define <8 x i16> @add_v8i16_76u43210(<8 x i16> %a, <8 x i16> %b) {
 ; SSE2-LABEL: @add_v8i16_76u43210(
-; SSE2-NEXT:    [[SHIFT:%.*]] = shufflevector <8 x i16> [[A:%.*]], <8 x i16> poison, <8 x i32> <i32 1, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison>
-; SSE2-NEXT:    [[TMP1:%.*]] = add <8 x i16> [[A]], [[SHIFT]]
 ; SSE2-NEXT:    [[SHIFT2:%.*]] = shufflevector <8 x i16> [[B:%.*]], <8 x i16> poison, <8 x i32> <i32 poison, i32 poison, i32 poison, i32 poison, i32 5, i32 poison, i32 poison, i32 poison>
 ; SSE2-NEXT:    [[TMP2:%.*]] = add <8 x i16> [[B]], [[SHIFT2]]
 ; SSE2-NEXT:    [[SHIFT3:%.*]] = shufflevector <8 x i16> [[B]], <8 x i16> poison, <8 x i32> <i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 6>
 ; SSE2-NEXT:    [[TMP3:%.*]] = add <8 x i16> [[SHIFT3]], [[B]]
+; SSE2-NEXT:    [[TMP7:%.*]] = shufflevector <8 x i16> [[A:%.*]], <8 x i16> poison, <8 x i32> <i32 1, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison>
+; SSE2-NEXT:    [[TMP1:%.*]] = add <8 x i16> [[A]], [[TMP7]]
 ; SSE2-NEXT:    [[TMP4:%.*]] = shufflevector <8 x i16> [[A]], <8 x i16> [[B]], <8 x i32> <i32 2, i32 4, i32 6, i32 8, i32 poison, i32 poison, i32 poison, i32 poison>
 ; SSE2-NEXT:    [[TMP5:%.*]] = shufflevector <8 x i16> [[A]], <8 x i16> [[B]], <8 x i32> <i32 3, i32 5, i32 7, i32 9, i32 poison, i32 poison, i32 poison, i32 poison>
 ; SSE2-NEXT:    [[TMP6:%.*]] = add <8 x i16> [[TMP4]], [[TMP5]]
@@ -404,13 +404,13 @@ define <16 x i16> @add_v16i16_FEuCBA98765432u0(<16 x i16> %a, <16 x i16> %b) {
 ; SSE4-LABEL: @add_v16i16_FEuCBA98765432u0(
 ; SSE4-NEXT:    [[TMP2:%.*]] = shufflevector <16 x i16> [[A:%.*]], <16 x i16> [[B:%.*]], <16 x i32> <i32 1, i32 poison, i32 5, i32 7, i32 17, i32 19, i32 21, i32 23, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison>
 ; SSE4-NEXT:    [[TMP10:%.*]] = shufflevector <16 x i16> [[TMP2]], <16 x i16> [[A]], <16 x i32> <i32 0, i32 poison, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 25, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison>
-; SSE4-NEXT:    [[TMP4:%.*]] = shufflevector <16 x i16> [[A]], <16 x i16> [[B]], <16 x i32> <i32 0, i32 poison, i32 4, i32 6, i32 16, i32 18, i32 20, i32 22, i32 8, i32 poison, i32 11, i32 12, i32 poison, i32 poison, i32 poison, i32 poison>
-; SSE4-NEXT:    [[TMP5:%.*]] = shufflevector <16 x i16> [[TMP10]], <16 x i16> [[A]], <16 x i32> <i32 0, i32 poison, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 8, i32 poison, i32 26, i32 29, i32 poison, i32 poison, i32 poison, i32 poison>
+; SSE4-NEXT:    [[TMP4:%.*]] = shufflevector <16 x i16> [[A]], <16 x i16> [[B]], <16 x i32> <i32 0, i32 poison, i32 4, i32 6, i32 16, i32 18, i32 20, i32 22, i32 8, i32 11, i32 12, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison>
+; SSE4-NEXT:    [[TMP5:%.*]] = shufflevector <16 x i16> [[TMP10]], <16 x i16> [[A]], <16 x i32> <i32 0, i32 poison, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 8, i32 26, i32 29, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison>
 ; SSE4-NEXT:    [[TMP6:%.*]] = add <16 x i16> [[TMP4]], [[TMP5]]
-; SSE4-NEXT:    [[TMP7:%.*]] = shufflevector <16 x i16> [[A]], <16 x i16> [[B]], <16 x i32> <i32 14, i32 24, i32 28, i32 30, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison>
-; SSE4-NEXT:    [[TMP8:%.*]] = shufflevector <16 x i16> [[A]], <16 x i16> [[B]], <16 x i32> <i32 15, i32 25, i32 29, i32 31, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison>
+; SSE4-NEXT:    [[TMP7:%.*]] = shufflevector <16 x i16> [[A]], <16 x i16> [[B]], <16 x i32> <i32 14, i32 24, i32 poison, i32 28, i32 30, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison>
+; SSE4-NEXT:    [[TMP8:%.*]] = shufflevector <16 x i16> [[A]], <16 x i16> [[B]], <16 x i32> <i32 15, i32 25, i32 poison, i32 29, i32 31, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison>
 ; SSE4-NEXT:    [[TMP9:%.*]] = add <16 x i16> [[TMP7]], [[TMP8]]
-; SSE4-NEXT:    [[RESULT:%.*]] = shufflevector <16 x i16> [[TMP9]], <16 x i16> [[TMP6]], <16 x i32> <i32 3, i32 2, i32 poison, i32 1, i32 0, i32 27, i32 26, i32 24, i32 23, i32 22, i32 21, i32 20, i32 19, i32 18, i32 poison, i32 16>
+; SSE4-NEXT:    [[RESULT:%.*]] = shufflevector <16 x i16> [[TMP9]], <16 x i16> [[TMP6]], <16 x i32> <i32 4, i32 3, i32 poison, i32 1, i32 0, i32 26, i32 25, i32 24, i32 23, i32 22, i32 21, i32 20, i32 19, i32 18, i32 poison, i32 16>
 ; SSE4-NEXT:    ret <16 x i16> [[RESULT]]
 ;
 ; AVX2-LABEL: @add_v16i16_FEuCBA98765432u0(
@@ -1183,14 +1183,14 @@ define <8 x float> @add_v8f32_76u43210(<8 x float> %a, <8 x float> %b) {
 ; SSE2-NEXT:    ret <8 x float> [[RESULT]]
 ;
 ; SSE4-LABEL: @add_v8f32_76u43210(
-; SSE4-NEXT:    [[TMP4:%.*]] = shufflevector <8 x float> [[B:%.*]], <8 x float> [[A:%.*]], <8 x i32> <i32 6, i32 5, i32 poison, i32 0, i32 14, i32 12, i32 10, i32 8>
-; SSE4-NEXT:    [[TMP5:%.*]] = shufflevector <8 x float> [[B]], <8 x float> [[A]], <8 x i32> <i32 7, i32 4, i32 poison, i32 1, i32 15, i32 13, i32 11, i32 9>
+; SSE4-NEXT:    [[TMP4:%.*]] = shufflevector <8 x float> [[A:%.*]], <8 x float> [[B:%.*]], <8 x i32> <i32 14, i32 13, i32 poison, i32 8, i32 6, i32 4, i32 2, i32 0>
+; SSE4-NEXT:    [[TMP5:%.*]] = shufflevector <8 x float> [[A]], <8 x float> [[B]], <8 x i32> <i32 15, i32 12, i32 poison, i32 9, i32 7, i32 5, i32 3, i32 1>
 ; SSE4-NEXT:    [[TMP6:%.*]] = fadd <8 x float> [[TMP4]], [[TMP5]]
 ; SSE4-NEXT:    ret <8 x float> [[TMP6]]
 ;
 ; AVX-LABEL: @add_v8f32_76u43210(
-; AVX-NEXT:    [[TMP1:%.*]] = shufflevector <8 x float> [[B:%.*]], <8 x float> [[A:%.*]], <8 x i32> <i32 6, i32 5, i32 poison, i32 0, i32 14, i32 12, i32 10, i32 8>
-; AVX-NEXT:    [[TMP2:%.*]] = shufflevector <8 x float> [[B]], <8 x float> [[A]], <8 x i32> <i32 7, i32 4, i32 poison, i32 1, i32 15, i32 13, i32 11, i32 9>
+; AVX-NEXT:    [[TMP1:%.*]] = shufflevector <8 x float> [[A:%.*]], <8 x float> [[B:%.*]], <8 x i32> <i32 14, i32 13, i32 poison, i32 8, i32 6, i32 4, i32 2, i32 0>
+; AVX-NEXT:    [[TMP2:%.*]] = shufflevector <8 x float> [[A]], <8 x float> [[B]], <8 x i32> <i32 15, i32 12, i32 poison, i32 9, i32 7, i32 5, i32 3, i32 1>
 ; AVX-NEXT:    [[RESULT:%.*]] = fadd <8 x float> [[TMP1]], [[TMP2]]
 ; AVX-NEXT:    ret <8 x float> [[RESULT]]
 ;
diff --git a/llvm/test/Transforms/PhaseOrdering/X86/hsub.ll b/llvm/test/Transforms/PhaseOrdering/X86/hsub.ll
index fd160b7c57024..98d35f862d418 100644
--- a/llvm/test/Transforms/PhaseOrdering/X86/hsub.ll
+++ b/llvm/test/Transforms/PhaseOrdering/X86/hsub.ll
@@ -121,12 +121,12 @@ define <8 x i16> @sub_v8i16_u1234567(<8 x i16> %a, <8 x i16> %b) {
 
 define <8 x i16> @sub_v8i16_76u43210(<8 x i16> %a, <8 x i16> %b) {
 ; SSE2-LABEL: @sub_v8i16_76u43210(
-; SSE2-NEXT:    [[SHIFT:%.*]] = shufflevector <8 x i16> [[A:%.*]], <8 x i16> poison, <8 x i32> <i32 1, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison>
-; SSE2-NEXT:    [[TMP1:%.*]] = sub <8 x i16> [[A]], [[SHIFT]]
 ; SSE2-NEXT:    [[SHIFT2:%.*]] = shufflevector <8 x i16> [[B:%.*]], <8 x i16> poison, <8 x i32> <i32 poison, i32 poison, i32 poison, i32 poison, i32 5, i32 poison, i32 poison, i32 poison>
 ; SSE2-NEXT:    [[TMP2:%.*]] = sub <8 x i16> [[B]], [[SHIFT2]]
 ; SSE2-NEXT:    [[SHIFT3:%.*]] = shufflevector <8 x i16> [[B]], <8 x i16> poison, <8 x i32> <i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 6>
 ; SSE2-NEXT:    [[TMP3:%.*]] = sub <8 x i16> [[SHIFT3]], [[B]]
+; SSE2-NEXT:    [[TMP7:%.*]] = shufflevector <8 x i16> [[A:%.*]], <8 x i16> poison, <8 x i32> <i32 1, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison>
+; SSE2-NEXT:    [[TMP1:%.*]] = sub <8 x i16> [[A]], [[TMP7]]
 ; SSE2-NEXT:    [[TMP4:%.*]] = shufflevector <8 x i16> [[A]], <8 x i16> [[B]], <8 x i32> <i32 2, i32 4, i32 6, i32 8, i32 poison, i32 poison, i32 poison, i32 poison>
 ; SSE2-NEXT:    [[TMP5:%.*]] = shufflevector <8 x i16> [[A]], <8 x i16> [[B]], <8 x i32> <i32 3, i32 5, i32 7, i32 9, i32 poison, i32 poison, i32 poison, i32 poison>
 ; SSE2-NEXT:    [[TMP6:%.*]] = sub <8 x i16> [[TMP4]], [[TMP5]]
@@ -398,13 +398,13 @@ define <16 x i16> @sub_v16i16_FEuCBA98765432u0(<16 x i16> %a, <16 x i16> %b) {
 ; SSE4-LABEL: @sub_v16i16_FEuCBA98765432u0(
 ; SSE4-NEXT:    [[TMP2:%.*]] = shufflevector <16 x i16> [[A:%.*]], <16 x i16> [[B:%.*]], <16 x i32> <i32 1, i32 poison, i32 5, i32 7, i32 17, i32 19, i32 21, i32 23, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison>
 ; SSE4-NEXT:    [[TMP10:%.*]] = shufflevector <16 x i16> [[TMP2]], <16 x i16> [[A]], <16 x i32> <i32 0, i32 poison, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 25, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison>
-; SSE4-NEXT:    [[TMP4:%.*]] = shufflevector <16 x i16> [[A]], <16 x i16> [[B]], <16 x i32> <i32 0, i32 poison, i32 4, i32 6, i32 16, i32 18, i32 20, i32 22, i32 8, i32 poison, i32 10, i32 12, i32 poison, i32 poison, i32 poison, i32 poison>
-; SSE4-NEXT:    [[TMP5:%.*]] = shufflevector <16 x i16> [[TMP10]], <16 x i16> [[A]], <16 x i32> <i32 0, i32 poison, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 8, i32 poison, i32 27, i32 29, i32 poison, i32 poison, i32 poison, i32 poison>
+; SSE4-NEXT:    [[TMP4:%.*]] = shufflevector <16 x i16> [[A]], <16 x i16> [[B]], <16 x i32> <i32 0, i32 poison, i32 4, i32 6, i32 16, i32 18, i32 20, i32 22, i32 8, i32 10, i32 12, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison>
+; SSE4-NEXT:    [[TMP5:%.*]] = shufflevector <16 x i16> [[TMP10]], <16 x i16> [[A]], <16 x i32> <i32 0, i32 poison, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 8, i32 27, i32 29, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison>
 ; SSE4-NEXT:    [[TMP6:%.*]] = sub <16 x i16> [[TMP4]], [[TMP5]]
-; SSE4-NEXT:    [[TMP7:%.*]] = shufflevector <16 x i16> [[A]], <16 x i16> [[B]], <16 x i32> <i32 14, i32 24, i32 28, i32 30, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison>
-; SSE4-NEXT:    [[TMP8:%.*]] = shufflevector <16 x i16> [[A]], <16 x i16> [[B]], <16 x i32> <i32 15, i32 25, i32 29, i32 31, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison>
+; SSE4-NEXT:    [[TMP7:%.*]] = shufflevector <16 x i16> [[A]], <16 x i16> [[B]], <16 x i32> <i32 14, i32 24, i32 poison, i32 28, i32 30, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison>
+; SSE4-NEXT:    [[TMP8:%.*]] = shufflevector <16 x i16> [[A]], <16 x i16> [[B]], <16 x i32> <i32 15, i32 25, i32 poison, i32 29, i32 31, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison, i32 poison>
 ; SSE4-NEXT:    [[TMP9:%.*]] = sub <16 x i16> [[TMP7]], [[TMP8]]
-; SSE4-NEXT:    [[RESULT:%.*]] = shufflevector <16 x i16> [[TMP9]], <16 x i16> [[TMP6]], <16 x i32> <i32 3, i32 2, i32 poison, i32 1, i32 0, i32 27, i32 26, i32 24, i32 23, i32 22, i32 21, i32 20, i32 19, i32 18, i32 poison, i32 16>
+; SSE4-NEXT:    [[RESULT:%.*]] = shufflevector <16 x i16> [[TMP9]], <16 x i16> [[TMP6]], <16 x i32> <i32 4, i32 3, i32 poison, i32 1, i32 0, i32 26, i32 25, i32 24, i32 23, i32 22, i32 21, i32 20, i32 19, i32 18, i32 poison, i32 16>
 ; SSE4-NEXT:    ret <16 x i16> [[RESULT]]
 ;
 ; AVX2-LABEL: @sub_v16i16_FEuCBA98765432u0(
@@ -1177,14 +1177,14 @@ define <8 x float> @sub_v8f32_76u43210(<8 x float> %a, <8 x float> %b) {
 ; SSE2-NEXT:    ret <8 x float> [[RESULT]]
 ;
 ; SSE4-LABEL: @sub_v8f32_76u43210(
-; SSE4-NEXT:    [[TMP4:%.*]] = shufflevector <8 x float> [[B:%.*]], <8 x float> [[A:%.*]], <8 x i32> <i32 6, i32 4, i32 poison, i32 0, i32 14, i32 12, i32 10, i32 8>
-; SSE4-NEXT:    [[TMP5:%.*]] = shufflevector <8 x float> [[B]], <8 x float> [[A]], <8 x i32> <i32 7, i32 5, i32 poison, i32 1, i32 15, i32 13, i32 11, i32 9>
+; SSE4-NEXT:    [[TMP4:%.*]] = shufflevector <8 x float> [[A:%.*]], <8 x float> [[B:%.*]], <8 x i32> <i32 14, i32 12, i32 poison, i32 8, i32 6, i32 4, i32 2, i32 0>
+; SSE4-NEXT:    [[TMP5:%.*]] = shufflevector <8 x float> [[A]], <8 x float> [[B]], <8 x i32> <i32 15, i32 13, i32 poison, i32 9, i32 7, i32 5, i32 3, i32 1>
 ; SSE4-NEXT:    [[TMP6:%.*]] = fsub <8 x float> [[TMP4]], [[TMP5]]
 ; SSE4-NEXT:    ret <8 x float> [[TMP6]]
 ;
 ; AVX-LABEL: @sub_v8f32_76u43210(
-; AVX-NEXT:    [[TMP1:%.*]] = shufflevector <8 x float> [[B:%.*]], <8 x float> [[A:%.*]], <8 x i32> <i32 6, i32 4, i32 poison, i32 0, i32 14, i32 12, i32 10, i32 8>
-; AVX-NEXT:    [[TMP2:%.*]] = shufflevector <8 x float> [[B]], <8 x float> [[A]], <8 x i32> <i32 7, i32 5, i32 poison, i32 1, i32 15, i32 13, i32 11, i32 9>
+; AVX-NEXT:    [[TMP1:%.*]] = shufflevector <8 x float> [[A:%.*]], <8 x float> [[B:%.*]], <8 x i32> <i32 14, i32 12, i32 poison, i32 8, i32 6, i32 4, i32 2, i32 0>
+; AVX-NEXT:    [[TMP2:%.*]] = shufflevector <8 x float> [[A]], <8 x float> [[B]], <8 x i32> <i32 15, i32 13, i32 poison, i32 9, i32 7, i32 5, i32 3, i32 1>
 ; AVX-NEXT:    [[RESULT:%.*]] = fsub <8 x float> [[TMP1]], [[TMP2]]
 ; AVX-NEXT:    ret <8 x float> [[RESULT]]
 ;
diff --git a/llvm/test/Transforms/VectorCombine/AArch64/ext-extract.ll b/llvm/test/Transforms/VectorCombine/AArch64/ext-extract.ll
index 60700412686ea..7358ebf637662 100644
--- a/llvm/test/Transforms/VectorCombine/AArch64/ext-extract.ll
+++ b/llvm/test/Transforms/VectorCombine/AArch64/ext-extract.ll
@@ -17,11 +17,21 @@ define void @zext_v4i8_all_lanes_used(<4 x i8> %src) {
 ; CHECK-NEXT:    [[TMP6:%.*]] = lshr i32 [[TMP1]], 8
 ; CHECK-NEXT:    [[TMP7:%.*]] = and i32 [[TMP6]], 255
 ; CHECK-NEXT:    [[TMP9:%.*]] = and i32 [[TMP1]], 255
-; CHECK-NEXT:    [[EXT9:%.*]] = zext nneg <4 x i8> [[SRC]] to <4 x i32>
-; CHECK-NEXT:    [[EXT_0:%.*]] = extractelement <4 x i32> [[EXT9]], i64 0
-; CHECK-NEXT:    [[EXT_1:%.*]] = extractelement <4 x i32> [[EXT9]], i64 1
-; CHECK-NEXT:    [[EXT_2:%.*]] = extractelement <4 x i32> [[EXT9]], i64 2
-; CHECK-NEXT:    [[EXT_3:%.*]] = extractelement <4 x i32> [[EXT9]], i64 3
+; CHECK-NEXT:    [[TMP8:%.*]] = freeze <4 x i8> [[SRC]]
+; CHECK-NEXT:    [[TMP23:%.*]] = bitcast <4 x i8> [[TMP8]] to i32
+; CHECK-NEXT:    [[TMP10:%.*]] = lshr i32 [[TMP23]], 24
+; CHECK-NEXT:    [[TMP11:%.*]] = lshr i32 [[TMP23]], 16
+; CHECK-NEXT:    [[TMP12:%.*]] = and i32 [[TMP11]], 255
+; CHECK-NEXT:    [[TMP13:%.*]] = lshr i32 [[TMP23]], 8
+; CHECK-NEXT:    [[TMP14:%.*]] = and i32 [[TMP13]], 255
+; CHECK-NEXT:    [[TMP15:%.*]] = freeze <4 x i8> [[SRC]]
+; CHECK-NEXT:    [[TMP16:%.*]] = bitcast <4 x i8> [[TMP15]] to i32
+; CHECK-NEXT:    [[TMP17:%.*]] = lshr i32 [[TMP16]], 24
+; CHECK-NEXT:    [[TMP18:%.*]] = lshr i32 [[TMP16]], 16
+; CHECK-NEXT:    [[TMP19:%.*]] = and i32 [[TMP18]], 255
+; CHECK-NEXT:    [[TMP20:%.*]] = freeze <4 x i8> [[SRC]]
+; CHECK-NEXT:    [[TMP21:%.*]] = bitcast <4 x i8> [[TMP20]] to i32
+; CHECK-NEXT:    [[TMP22:%.*]] = lshr i32 [[TMP21]], 24
 ; CHECK-NEXT:    call void @use.i32(i32 [[TMP9]])
 ; CHECK-NEXT:    call void @use.i32(i32 [[TMP7]])
 ; CHECK-NEXT:    call void @use.i32(i32 [[TMP5]])
@@ -83,10 +93,14 @@ define void @zext_v4i8_3_lanes_used_1(<4 x i8> %src) {
 ; CHECK-NEXT:    [[TMP5:%.*]] = and i32 [[TMP4]], 255
 ; CHECK-NEXT:    [[TMP6:%.*]] = lshr i32 [[TMP1]], 8
 ; CHECK-NEXT:    [[TMP7:%.*]] = and i32 [[TMP6]], 255
-; CHECK-NEXT:    [[EXT9:%.*]] = zext nneg <4 x i8> [[SRC]] to <4 x i32>
-; CHECK-NEXT:    [[EXT_1:%.*]] = extractelement <4 x i32> [[EXT9]], i64 1
-; CHECK-NEXT:    [[EXT_2:%.*]] = extractelement <4 x i32> [[EXT9]], i64 2
-; CHECK-NEXT:    [[EXT_3:%.*]] = extractelement <4 x i32> [[EXT9]], i64 3
+; CHECK-NEXT:    [[TMP15:%.*]] = freeze <4 x i8> [[SRC]]
+; CHECK-NEXT:    [[TMP8:%.*]] = bitcast <4 x i8> [[TMP15]] to i32
+; CHECK-NEXT:    [[TMP9:%.*]] = lshr i32 [[TMP8]], 24
+; CHECK-NEXT:    [[TMP10:%.*]] = lshr i32 [[TMP8]], 16
+; CHECK-NEXT:    [[TMP11:%.*]] = and i32 [[TMP10]], 255
+; CHECK-NEXT:    [[TMP12:%.*]] = freeze <4 x i8> [[SRC]]
+; CHECK-NEXT:    [[TMP13:%.*]] = bitcast <4 x i8> [[TMP12]] to i32
+; CHECK-NEXT:    [[TMP14:%.*]] = lshr i32 [[TMP13]], 24
 ; CHECK-NEXT:    call void @use.i32(i32 [[TMP7]])
 ; CHECK-NEXT:    call void @use.i32(i32 [[TMP5]])
 ; CHECK-NEXT:    call void @use.i32(i32 [[TMP2]])
@@ -114,10 +128,14 @@ define void @zext_v4i8_3_lanes_used_2(<4 x i8> %src) {
 ; CHECK-NEXT:    [[TMP4:%.*]] = lshr i32 [[TMP1]], 8
 ; CHECK-NEXT:    [[TMP5:%.*]] = and i32 [[TMP4]], 255
 ; CHECK-NEXT:    [[TMP7:%.*]] = and i32 [[TMP1]], 255
-; CHECK-NEXT:    [[EXT9:%.*]] = zext nneg <4 x i8> [[SRC]] to <4 x i32>
-; CHECK-NEXT:    [[EXT_0:%.*]] = extractelement <4 x i32> [[EXT9]], i64 0
-; CHECK-NEXT:    [[EXT_1:%.*]] = extractelement <4 x i32> [[EXT9]], i64 1
-; CHECK-NEXT:    [[EXT_3:%.*]] = extractelement <4 x i32> [[EXT9]], i64 3
+; CHECK-NEXT:    [[TMP6:%.*]] = freeze <4 x i8> [[SRC]]
+; CHECK-NEXT:    [[TMP14:%.*]] = bitcast <4 x i8> [[TMP6]] to i32
+; CHECK-NEXT:    [[TMP8:%.*]] = lshr i32 [[TMP14]], 24
+; CHECK-NEXT:    [[TMP9:%.*]] = lshr i32 [[TMP14]], 8
+; CHECK-NEXT:    [[TMP10:%.*]] = and i32 [[TMP9]], 255
+; CHECK-NEXT:    [[TMP11:%.*]] = freeze <4 x i8> [[SRC]]
+; CHECK-NEXT:    [[TMP12:%.*]] = bitcast <4 x i8> [[TMP11]] to i32
+; CHECK-NEXT:    [[TMP13:%.*]] = lshr i32 [[TMP12]], 24
 ; CHECK-NEXT:    call void @use.i32(i32 [[TMP7]])
 ; CHECK-NEXT:    call void @use.i32(i32 [[TMP5]])
 ; CHECK-NEXT:    call void @use.i32(i32 [[TMP2]])
@@ -145,9 +163,10 @@ define void @zext_v4i8_2_lanes_used_1(<4 x i8> %src) {
 ; CHECK-NEXT:    [[TMP3:%.*]] = and i32 [[TMP2]], 255
 ; CHECK-NEXT:    [[TMP4:%.*]] = lshr i32 [[TMP1]], 8
 ; CHECK-NEXT:    [[TMP5:%.*]] = and i32 [[TMP4]], 255
-; CHECK-NEXT:    [[EXT9:%.*]] = zext nneg <4 x i8> [[SRC]] to <4 x i32>
-; CHECK-NEXT:    [[EXT_1:%.*]] = extractelement <4 x i32> [[EXT9]], i64 1
-; CHECK-NEXT:    [[EXT_2:%.*]] = extractelement <4 x i32> [[EXT9]], i64 2
+; CHECK-NEXT:    [[TMP6:%.*]] = freeze <4 x i8> [[SRC]]
+; CHECK-NEXT:    [[TMP7:%.*]] = bitcast <4 x i8> [[TMP6]] to i32
+; CHECK-NEXT:    [[TMP8:%.*]] = lshr i32 [[TMP7]], 16
+; CHECK-NEXT:    [[TMP9:%.*]] = and i32 [[TMP8]], 255
 ; CHECK-NEXT:    call void @use.i32(i32 [[TMP5]])
 ; CHECK-NEXT:    call void @use.i32(i32 [[TMP3]])
 ; CHECK-NEXT:    ret void
@@ -171,9 +190,10 @@ define void @zext_v4i8_2_lanes_used_2(<4 x i8> %src) {
 ; CHECK-NEXT:    [[TMP2:%.*]] = lshr i32 [[TMP1]], 16
 ; CHECK-NEXT:    [[TMP3:%.*]] = and i32 [[TMP2]], 255
 ; CHECK-NEXT:    [[TMP5:%.*]] = and i32 [[TMP1]], 255
-; CHECK-NEXT:    [[EXT9:%.*]] = zext nneg <4 x i8> [[SRC]] to <4 x i32>
-; CHECK-NEXT:    [...
[truncated]

github-actions bot commented Jul 16, 2025

✅ With the latest revision this PR passed the undef deprecator.

github-actions bot commented Jul 16, 2025

✅ With the latest revision this PR passed the C/C++ code formatter.

@nikic (Contributor) left a comment

Compile-time: https://llvm-compile-time-tracker.com/compare.php?from=60579ec3059b2b6cc9dad90eaac1ed363fc395a7&to=7322d4b393c94af7164439419100d9c25df1e420&stat=instructions:u

Adding the entire function to the worklist at once is quite expensive...

Possibly we could do something more targeted at removing dead instructions early?

@davemgreen (Collaborator, Author) commented on Jul 17, 2025

> Compile-time: https://llvm-compile-time-tracker.com/compare.php?from=60579ec3059b2b6cc9dad90eaac1ed363fc395a7&to=7322d4b393c94af7164439419100d9c25df1e420&stat=instructions:u
>
> Adding the entire function to the worklist at once is quite expensive...
>
> Possibly we could do something more targeted at removing dead instructions early?

Thanks. Yeah, it is probably not very useful for a lot of nodes - many will not be vector instructions. I will look into erasing the instructions as we replace them, if we can.
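
A minimal sketch of that "erase as we replace" direction, using a hypothetical helper name rather than anything from the patch:

// Hedged sketch: after a fold produces New, replace Old and immediately erase
// it together with any operand chain that becomes trivially dead, instead of
// deferring the cleanup to a later worklist pass.
#include "llvm/IR/Instruction.h"
#include "llvm/Transforms/Utils/InstructionWorklist.h"
#include "llvm/Transforms/Utils/Local.h"

using namespace llvm;

static void replaceAndEraseDead(Instruction &Old, Value *New,
                                InstructionWorklist &Worklist) {
  Old.replaceAllUsesWith(New);
  // Old now has no uses; if it is side-effect free it is trivially dead and
  // gets erased here, along with operands that die with it. The callback
  // keeps the worklist free of dangling pointers.
  RecursivelyDeleteTriviallyDeadInstructions(
      &Old, /*TLI=*/nullptr, /*MSSAU=*/nullptr, [&](Value *V) {
        if (auto *DeadI = dyn_cast<Instruction>(V))
          Worklist.remove(DeadI);
      });
}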

davemgreen marked this pull request as draft on July 19, 2025 16:14
davemgreen force-pushed the gh-vc-worklistorder branch from f13ab03 to c35d782 on July 19, 2025 16:15
davemgreen marked this pull request as ready for review on August 1, 2025 19:04
davemgreen force-pushed the gh-vc-worklistorder branch from c35d782 to 9cf4293 on August 1, 2025 19:04
davemgreen changed the title from "[VectorCombine] Add initial nodes to the Worklist in VectorCombine" to "[VectorCombine] Remove dead node immediately in VectorCombine" on Aug 3, 2025
davemgreen force-pushed the gh-vc-worklistorder branch from 9cf4293 to 68ad445 on August 3, 2025 14:58
This tries to mirror how InstructionWorklist is used in InstCombine, initially
adding the nodes to a list that is then pushed onto the Worklist in reverse
order. The general order should be the same; the main advantage is that as
nodes are initially processed, the New and Old instructions are visited
immediately when something is altered, removing Old instructions as they are
replaced and letting other combines work without hitting OneUse checks.
davemgreen force-pushed the gh-vc-worklistorder branch from 68ad445 to c669a23 on August 8, 2025 12:01
@davemgreen (Collaborator, Author) commented

I think this is OK to go now. It was running into issues, but those have been ironed out. I moved one unrelated part into #152675.

@RKSimon (Collaborator) left a comment

LGTM - one trivial comment - but please update against latest trunk before merging in case there are any recent changes.

 }
 
 /// Try to reduce extract element costs by converting scalar binops to vector
 /// binops followed by extract.
 /// bo (ext0 V0, C), (ext1 V1, C)
-void VectorCombine::foldExtExtBinop(ExtractElementInst *Ext0,
-                                    ExtractElementInst *Ext1, Instruction &I) {
+Value *VectorCombine::foldExtExtBinop(Value *V0, Value *V1, Value *ExtIndex,

(pedantic) update the comment to use ExtIndex (or another common varname)

 }
 
 /// Try to reduce extract element costs by converting scalar compares to vector
 /// compares followed by extract.
 /// cmp (ext0 V0, C), (ext1 V1, C)
-void VectorCombine::foldExtExtCmp(ExtractElementInst *Ext0,
-                                  ExtractElementInst *Ext1, Instruction &I) {
+Value *VectorCombine::foldExtExtCmp(Value *V0, Value *V1, Value *ExtIndex,

(pedantic) update the comment to use ExtIndex (or another common varname)
