V8 Project
v8::internal::compiler::ScheduleLateNodeVisitor Class Reference

Public Member Functions

 ScheduleLateNodeVisitor (Scheduler *scheduler)
 
GenericGraphVisit::Control Pre (Node *node)
 
Public Member Functions inherited from v8::internal::compiler::GenericGraphVisit::NullNodeVisitor< B, S >
Control Pre (GenericNode< B, S > *node)
 
Control Post (GenericNode< B, S > *node)
 
void PreEdge (GenericNode< B, S > *from, int index, GenericNode< B, S > *to)
 
void PostEdge (GenericNode< B, S > *from, int index, GenericNode< B, S > *to)
 

Private Member Functions

BasicBlock * GetBlockForUse (Node::Edge edge)
 
void ScheduleNode (BasicBlock *block, Node *node)
 

Private Attributes

Scheduler * scheduler_
 
Schedule * schedule_
 

Detailed Description

Definition at line 479 of file scheduler.cc.
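
This visitor implements the "schedule late" phase of TurboFan's scheduler: Pre() places each schedulable node in the latest basic block that dominates all of its uses, hoisting it out of loops where possible, and ScheduleNode() records the placement and updates the use counts that make the node's inputs eligible in turn. The control protocol is visible in Pre()'s return values: CONTINUE once a node is handled, DEFER while some of its uses are still unscheduled. The sketch below shows one way such a deferring visitor could be driven; the names (Control, RunScheduleLate) and the plain worklist are illustrative stand-ins, not V8's GenericGraphVisit machinery.

    #include <deque>

    enum class Control { kContinue, kDefer };

    // Illustrative driver: deferred nodes are re-queued and revisited later.
    // In the real pass progress is guaranteed because scheduling any node
    // only ever decreases the unscheduled-use counts of its inputs.
    template <typename Node, typename Visitor>
    void RunScheduleLate(std::deque<Node*> worklist, Visitor* visitor) {
      while (!worklist.empty()) {
        Node* node = worklist.front();
        worklist.pop_front();
        if (visitor->Pre(node) == Control::kDefer) {
          worklist.push_back(node);  // not yet eligible; retry after its uses
        }
      }
    }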

Constructor & Destructor Documentation

◆ ScheduleLateNodeVisitor()

v8::internal::compiler::ScheduleLateNodeVisitor::ScheduleLateNodeVisitor ( Scheduler *  scheduler)
inline explicit

Member Function Documentation

◆ GetBlockForUse()

BasicBlock* v8::internal::compiler::ScheduleLateNodeVisitor::GetBlockForUse ( Node::Edge  edge)
inline private

Definition at line 547 of file scheduler.cc.

547  {
548    Node* use = edge.from();
549    IrOpcode::Value opcode = use->opcode();
550    if (opcode == IrOpcode::kPhi || opcode == IrOpcode::kEffectPhi) {
551      // If the use is from a fixed (i.e. non-floating) phi, use the block
552      // of the corresponding control input to the merge.
553      int index = edge.index();
554      if (scheduler_->GetPlacement(use) == Scheduler::kFixed) {
555        Trace("  input@%d into a fixed phi #%d:%s\n", index, use->id(),
556              use->op()->mnemonic());
557        Node* merge = NodeProperties::GetControlInput(use, 0);
558        opcode = merge->opcode();
559        DCHECK(opcode == IrOpcode::kMerge || opcode == IrOpcode::kLoop);
560        use = NodeProperties::GetControlInput(merge, index);
561      }
562    }
563    BasicBlock* result = schedule_->block(use);
564    if (result == NULL) return NULL;
565    Trace("  must dominate use #%d:%s in B%d\n", use->id(),
566          use->op()->mnemonic(), result->id());
567    return result;
568  }

References v8::internal::compiler::Schedule::block(), DCHECK, v8::internal::compiler::NodeProperties::GetControlInput(), v8::internal::compiler::Scheduler::GetPlacement(), v8::internal::compiler::Scheduler::kFixed, NULL, schedule_, scheduler_, v8::internal::compiler::Trace(), and use().

Referenced by Pre().
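
The phi special case above is the essential subtlety: a value flowing into input index of a fixed phi is consumed on the control edge entering the merge at that position, so it only has to dominate the corresponding predecessor block, not the merge block itself. A minimal model of that mapping, using stand-in types rather than V8's Node/BasicBlock:

    #include <vector>

    struct Block {
      int id;
      std::vector<Block*> predecessors;  // control predecessors, in merge order
    };

    // For a phi living in `merge`, value input i corresponds to control
    // predecessor i, mirroring the GetControlInput(merge, index) lookup above.
    Block* BlockForPhiInput(Block* merge, int input_index) {
      return merge->predecessors[input_index];
    }

In a diamond-shaped CFG this lets each phi input be scheduled inside its own branch instead of being forced into a block above the branch.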


◆ Pre()

GenericGraphVisit::Control v8::internal::compiler::ScheduleLateNodeVisitor::Pre ( Node *  node)
inline

Definition at line 484 of file scheduler.cc.

484  {
485    // Don't schedule nodes that are already scheduled.
486    if (schedule_->IsScheduled(node)) {
487      return GenericGraphVisit::CONTINUE;
488    }
489    Scheduler::SchedulerData* data = scheduler_->GetData(node);
490    DCHECK_EQ(Scheduler::kSchedulable, data->placement_);
491 
492    // If all the uses of a node have been scheduled, then the node itself can
493    // be scheduled.
494    bool eligible = data->unscheduled_count_ == 0;
495    Trace("Testing for schedule eligibility for #%d:%s = %s\n", node->id(),
496          node->op()->mnemonic(), eligible ? "true" : "false");
497    if (!eligible) return GenericGraphVisit::DEFER;
498 
499    // Determine the dominating block for all of the uses of this node. It is
500    // the latest block that this node can be scheduled in.
501    BasicBlock* block = NULL;
502    for (Node::Uses::iterator i = node->uses().begin(); i != node->uses().end();
503         ++i) {
504      BasicBlock* use_block = GetBlockForUse(i.edge());
505      block = block == NULL ? use_block : use_block == NULL
506                                              ? block
507                                              : scheduler_->GetCommonDominator(
508                                                    block, use_block);
509    }
510    DCHECK(block != NULL);
511 
512    int min_rpo = data->minimum_rpo_;
513    Trace(
514        "Schedule late conservative for #%d:%s is B%d at loop depth %d, "
515        "minimum_rpo = %d\n",
516        node->id(), node->op()->mnemonic(), block->id(), block->loop_depth_,
517        min_rpo);
518    // Hoist nodes out of loops if possible. Nodes can be hoisted iteratively
519    // into enclosing loop pre-headers until they would precede their
520    // ScheduleEarly position.
521    BasicBlock* hoist_block = block;
522    while (hoist_block != NULL && hoist_block->rpo_number_ >= min_rpo) {
523      if (hoist_block->loop_depth_ < block->loop_depth_) {
524        block = hoist_block;
525        Trace("  hoisting #%d:%s to block %d\n", node->id(),
526              node->op()->mnemonic(), block->id());
527      }
528      // Try to hoist to the pre-header of the loop header.
529      hoist_block = hoist_block->loop_header();
530      if (hoist_block != NULL) {
531        BasicBlock* pre_header = hoist_block->dominator_;
532        DCHECK(pre_header == NULL ||
533               *hoist_block->predecessors().begin() == pre_header);
534        Trace(
535            "  hoist to pre-header B%d of loop header B%d, depth would be %d\n",
536            pre_header->id(), hoist_block->id(), pre_header->loop_depth_);
537        hoist_block = pre_header;
538      }
539    }
540 
541    ScheduleNode(block, node);
542 
543    return GenericGraphVisit::CONTINUE;
544  }

References v8::internal::compiler::GenericGraphVisit::CONTINUE, DCHECK, DCHECK_EQ, v8::internal::compiler::GenericGraphVisit::DEFER, GetBlockForUse(), v8::internal::compiler::Scheduler::GetCommonDominator(), v8::internal::compiler::Scheduler::GetData(), v8::internal::compiler::Schedule::IsScheduled(), v8::internal::compiler::Scheduler::kSchedulable, v8::internal::compiler::Scheduler::SchedulerData::minimum_rpo_, NULL, v8::internal::compiler::Scheduler::SchedulerData::placement_, schedule_, ScheduleNode(), scheduler_, v8::internal::compiler::Trace(), and v8::internal::compiler::Scheduler::SchedulerData::unscheduled_count_.
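
The block chosen in Pre() is the pairwise common dominator of all use blocks. The GetCommonDominator() referenced above (scheduler.cc:308) is not reproduced on this page; the following is a sketch of the standard dominator-tree intersection such a routine performs, assuming, as reverse-postorder numbering guarantees, that a dominator always has a smaller RPO number than the blocks it dominates. Block here is a stand-in type.

    struct Block {
      int rpo_number;    // reverse-postorder number of this block
      Block* dominator;  // immediate dominator; NULL only at the entry block
    };

    // Walk whichever block is deeper (larger RPO number) up the dominator
    // tree until the two walks meet; the meeting point dominates both inputs.
    Block* CommonDominator(Block* b1, Block* b2) {
      while (b1 != b2) {
        if (b1->rpo_number < b2->rpo_number) {
          b2 = b2->dominator;
        } else {
          b1 = b1->dominator;
        }
      }
      return b1;
    }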


◆ ScheduleNode()

void v8::internal::compiler::ScheduleLateNodeVisitor::ScheduleNode ( BasicBlock *  block,
Node *  node 
)
inline private

Definition at line 570 of file scheduler.cc.

570  {
571    schedule_->PlanNode(block, node);
572    scheduler_->scheduled_nodes_[block->id()].push_back(node);
573 
574    // Reduce the use count of the node's inputs to potentially make them
575    // schedulable.
576    for (InputIter i = node->inputs().begin(); i != node->inputs().end(); ++i) {
577      Scheduler::SchedulerData* data = scheduler_->GetData(*i);
578      DCHECK(data->unscheduled_count_ > 0);
579      --data->unscheduled_count_;
580      if (FLAG_trace_turbo_scheduler) {
581        Trace("  Use count for #%d:%s (used by #%d:%s)-- = %d\n", (*i)->id(),
582              (*i)->op()->mnemonic(), i.edge().from()->id(),
583              i.edge().from()->op()->mnemonic(), data->unscheduled_count_);
584        if (data->unscheduled_count_ == 0) {
585          Trace("  newly eligible #%d:%s\n", (*i)->id(),
586                (*i)->op()->mnemonic());
587        }
588      }
589    }
590  }

References DCHECK, v8::internal::compiler::Scheduler::GetData(), v8::internal::compiler::Schedule::PlanNode(), schedule_, v8::internal::compiler::Scheduler::scheduled_nodes_, scheduler_, v8::internal::compiler::Trace(), and v8::internal::compiler::Scheduler::SchedulerData::unscheduled_count_.

Referenced by Pre().
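
The decrement loop above drives the whole pass: Pre() defers a node while data->unscheduled_count_ is non-zero, and ScheduleNode() is where those counts are driven toward zero. A compact model of that protocol, with a stand-in type rather than V8's SchedulerData:

    #include <vector>

    struct ToyNode {
      std::vector<ToyNode*> inputs;
      int unscheduled_count = 0;  // number of this node's uses not yet placed
    };

    // Called after `node` has been planned into a block: each input now has
    // one less unscheduled use; at zero it passes Pre()'s eligibility test.
    void OnNodeScheduled(ToyNode* node, std::vector<ToyNode*>* newly_eligible) {
      for (ToyNode* input : node->inputs) {
        if (--input->unscheduled_count == 0) {
          newly_eligible->push_back(input);
        }
      }
    }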


Member Data Documentation

◆ schedule_

Schedule* v8::internal::compiler::ScheduleLateNodeVisitor::schedule_
private

Definition at line 593 of file scheduler.cc.

Referenced by GetBlockForUse(), Pre(), and ScheduleNode().

◆ scheduler_

Scheduler* v8::internal::compiler::ScheduleLateNodeVisitor::scheduler_
private

Definition at line 592 of file scheduler.cc.

Referenced by GetBlockForUse(), Pre(), and ScheduleNode().


The documentation for this class was generated from the following file: scheduler.cc