V8 Project
v8::internal::CodeRange Class Reference

#include <spaces.h>


Classes

class  FreeBlock
 

Public Member Functions

 CodeRange (Isolate *isolate)
 
 ~CodeRange ()
 
bool SetUp (size_t requested_size)
 
void TearDown ()
 
bool valid ()
 
Address start ()
 
size_t size ()
 
bool contains (Address address)
 
MUST_USE_RESULT Address AllocateRawMemory (const size_t requested_size, const size_t commit_size, size_t *allocated)
 
bool CommitRawMemory (Address start, size_t length)
 
bool UncommitRawMemory (Address start, size_t length)
 
void FreeRawMemory (Address buf, size_t length)
 

Private Member Functions

bool GetNextAllocationBlock (size_t requested)
 
 DISALLOW_COPY_AND_ASSIGN (CodeRange)
 

Static Private Member Functions

static int CompareFreeBlockAddress (const FreeBlock *left, const FreeBlock *right)
 

Private Attributes

Isolate * isolate_
 
base::VirtualMemory * code_range_
 
List< FreeBlock > free_list_
 
List< FreeBlock > allocation_list_
 
int current_allocation_block_index_
 

Detailed Description

Definition at line 864 of file spaces.h.

Constructor & Destructor Documentation

◆ CodeRange()

v8::internal::CodeRange::CodeRange ( Isolate *  isolate)
explicit

Definition at line 91 of file spaces.cc.

92  : isolate_(isolate),
93    code_range_(NULL),
94    free_list_(0),
95    allocation_list_(0),
96    current_allocation_block_index_(0) {}

◆ ~CodeRange()

v8::internal::CodeRange::~CodeRange ( )
inline

Definition at line 867 of file spaces.h.

867 { TearDown(); }

References TearDown().


Member Function Documentation

◆ AllocateRawMemory()

Address v8::internal::CodeRange::AllocateRawMemory ( const size_t  requested_size,
const size_t  commit_size,
size_t *  allocated 
)

Definition at line 186 of file spaces.cc.

188  {
189    DCHECK(commit_size <= requested_size);
190    DCHECK(allocation_list_.length() == 0 ||
191           current_allocation_block_index_ < allocation_list_.length());
192    if (allocation_list_.length() == 0 ||
193        requested_size > allocation_list_[current_allocation_block_index_].size) {
194      // Find an allocation block large enough.
195      if (!GetNextAllocationBlock(requested_size)) return NULL;
196    }
197    // Commit the requested memory at the start of the current allocation block.
198    size_t aligned_requested = RoundUp(requested_size, MemoryChunk::kAlignment);
199    FreeBlock current = allocation_list_[current_allocation_block_index_];
200    if (aligned_requested >= (current.size - Page::kPageSize)) {
201      // Don't leave a small free block, useless for a large object or chunk.
202      *allocated = current.size;
203    } else {
204      *allocated = aligned_requested;
205    }
206    DCHECK(*allocated <= current.size);
207    DCHECK(IsAddressAligned(current.start, MemoryChunk::kAlignment));
208    if (!isolate_->memory_allocator()->CommitExecutableMemory(
209            code_range_, current.start, commit_size, *allocated)) {
210      *allocated = 0;
211      return NULL;
212    }
213    allocation_list_[current_allocation_block_index_].start += *allocated;
214    allocation_list_[current_allocation_block_index_].size -= *allocated;
215    if (*allocated == current.size) {
216      // This block is used up, get the next one.
217      GetNextAllocationBlock(0);
218    }
219    return current.start;
220  }

References allocation_list_, code_range_, v8::internal::MemoryAllocator::CommitExecutableMemory(), current_allocation_block_index_, DCHECK, GetNextAllocationBlock(), v8::internal::IsAddressAligned(), isolate_, v8::internal::MemoryChunk::kAlignment, v8::internal::Page::kPageSize, v8::internal::Isolate::memory_allocator(), NULL, v8::internal::RoundUp(), v8::internal::CodeRange::FreeBlock::size, and v8::internal::CodeRange::FreeBlock::start.

Referenced by v8::internal::MemoryAllocator::AllocateChunk().

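The interesting part of AllocateRawMemory() is how much of the current free block it decides to consume: the request is rounded up to MemoryChunk::kAlignment, and if the rounded size lands within one page of the block's end, the whole block is taken so that a useless sliver is not left on the free list. A minimal standalone sketch of that decision (the constants and helper names below are illustrative stand-ins, not V8's):

```cpp
#include <cassert>
#include <cstddef>

// Illustrative stand-ins for MemoryChunk::kAlignment and Page::kPageSize.
constexpr size_t kAlignment = 4096;
constexpr size_t kPageSize = 4096;

// Round `value` up to a power-of-two `alignment`.
constexpr size_t RoundUp(size_t value, size_t alignment) {
  return (value + alignment - 1) & ~(alignment - 1);
}

// How many bytes to carve off a free block of `block_size` bytes for a
// request of `requested` bytes, mirroring the "don't leave a small free
// block" rule in AllocateRawMemory().
size_t BytesToAllocate(size_t requested, size_t block_size) {
  size_t aligned_requested = RoundUp(requested, kAlignment);
  if (aligned_requested >= block_size - kPageSize) {
    return block_size;  // Take the whole block; the tail would be useless.
  }
  return aligned_requested;
}
```

RoundUp here assumes a power-of-two alignment, which is the case for MemoryChunk::kAlignment in practice.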

◆ CommitRawMemory()

bool v8::internal::CodeRange::CommitRawMemory ( Address  start,
size_t  length 
)

Definition at line 223 of file spaces.cc.

223  {
224    return isolate_->memory_allocator()->CommitMemory(start, length, EXECUTABLE);
225  }

References v8::internal::MemoryAllocator::CommitMemory(), v8::internal::EXECUTABLE, isolate_, v8::internal::Isolate::memory_allocator(), and start().

Referenced by v8::internal::MemoryChunk::CommitArea().


◆ CompareFreeBlockAddress()

int v8::internal::CodeRange::CompareFreeBlockAddress ( const FreeBlock *  left,
const FreeBlock *  right 
)
static private

Definition at line 136 of file spaces.cc.

137  {
138  // The entire point of CodeRange is that the difference between two
139  // addresses in the range can be represented as a signed 32-bit int,
140  // so the cast is semantically correct.
141  return static_cast<int>(left->start - right->start);
142 }

References v8::internal::CodeRange::FreeBlock::start.

Referenced by GetNextAllocationBlock().

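As the comment in the listing notes, the comparator's cast to int is sound only because the entire code range is small enough that the difference of any two addresses inside it fits in a signed 32-bit integer. A self-contained illustration of the same comparator shape (FreeBlock here is a local stand-in for v8::internal::CodeRange::FreeBlock, which subtracts byte pointers directly):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Local stand-in for CodeRange::FreeBlock.
struct FreeBlock {
  uintptr_t start;
  size_t size;
};

// Address-ordering comparator in the style of CompareFreeBlockAddress().
// The narrowing cast to int is only sound when both blocks lie inside one
// reserved range whose total size fits comfortably in 31 bits.
int CompareFreeBlockAddress(const FreeBlock* left, const FreeBlock* right) {
  return static_cast<int>(static_cast<intptr_t>(left->start) -
                          static_cast<intptr_t>(right->start));
}
```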

◆ contains()

bool v8::internal::CodeRange::contains ( Address  address)
inline

Definition at line 887 of file spaces.h.

887  {
888  if (!valid()) return false;
889  Address start = static_cast<Address>(code_range_->address());
890  return start <= address && address < start + code_range_->size();
891  }

References v8::base::VirtualMemory::address(), code_range_, v8::base::VirtualMemory::size(), start(), and valid().

Referenced by v8::internal::Heap::AllocateCode(), v8::internal::Heap::CopyCode(), and v8::internal::MemoryAllocator::FreeMemory().

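contains() is a half-open interval test: an address is in the range iff start <= address && address < start + size, so the first byte of the range is included and the byte one past the end is not. A minimal model with integer addresses (names assumed, not V8's):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Half-open containment test in the style of CodeRange::contains():
// the first byte of the range is in, the byte one past the end is out.
bool Contains(uintptr_t range_start, size_t range_size, uintptr_t address) {
  return range_start <= address && address < range_start + range_size;
}
```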

◆ DISALLOW_COPY_AND_ASSIGN()

v8::internal::CodeRange::DISALLOW_COPY_AND_ASSIGN ( CodeRange  )
private

◆ FreeRawMemory()

void v8::internal::CodeRange::FreeRawMemory ( Address  buf,
size_t  length 
)

Definition at line 233 of file spaces.cc.

233  {
234    DCHECK(IsAddressAligned(address, MemoryChunk::kAlignment));
235    free_list_.Add(FreeBlock(address, length));
236    code_range_->Uncommit(address, length);
237  }

References code_range_, DCHECK, free_list_, v8::internal::IsAddressAligned(), v8::internal::MemoryChunk::kAlignment, and v8::base::VirtualMemory::Uncommit().

Referenced by v8::internal::MemoryAllocator::FreeMemory().


◆ GetNextAllocationBlock()

bool v8::internal::CodeRange::GetNextAllocationBlock ( size_t  requested)
private

Definition at line 145 of file spaces.cc.

145  {
146    for (current_allocation_block_index_++;
147         current_allocation_block_index_ < allocation_list_.length();
148         current_allocation_block_index_++) {
149      if (requested <= allocation_list_[current_allocation_block_index_].size) {
150        return true;  // Found a large enough allocation block.
151      }
152    }
153  
154    // Sort and merge the free blocks on the free list and the allocation list.
155    free_list_.AddAll(allocation_list_);
156    allocation_list_.Clear();
157    free_list_.Sort(&CompareFreeBlockAddress);
158    for (int i = 0; i < free_list_.length();) {
159      FreeBlock merged = free_list_[i];
160      i++;
161      // Add adjacent free blocks to the current merged block.
162      while (i < free_list_.length() &&
163             free_list_[i].start == merged.start + merged.size) {
164        merged.size += free_list_[i].size;
165        i++;
166      }
167      if (merged.size > 0) {
168        allocation_list_.Add(merged);
169      }
170    }
171    free_list_.Clear();
172  
173    for (current_allocation_block_index_ = 0;
174         current_allocation_block_index_ < allocation_list_.length();
175         current_allocation_block_index_++) {
176      if (requested <= allocation_list_[current_allocation_block_index_].size) {
177        return true;  // Found a large enough allocation block.
178      }
179    }
180    current_allocation_block_index_ = 0;
181    // Code range is full or too fragmented.
182    return false;
183  }

References allocation_list_, CompareFreeBlockAddress(), current_allocation_block_index_, free_list_, size(), v8::internal::CodeRange::FreeBlock::size, and v8::internal::CodeRange::FreeBlock::start.

Referenced by AllocateRawMemory().

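When the current allocation list is exhausted, the function rebuilds it: all known free blocks are sorted by address (via CompareFreeBlockAddress) and runs of exactly adjacent blocks (next.start == merged.start + merged.size) are coalesced into one. The coalescing pass can be sketched on its own, with std::vector standing in for V8's List:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Local stand-in for CodeRange::FreeBlock.
struct FreeBlock {
  uintptr_t start;
  size_t size;
};

// Sort free blocks by address and merge exactly-adjacent neighbours,
// mirroring the middle of GetNextAllocationBlock().
std::vector<FreeBlock> MergeFreeBlocks(std::vector<FreeBlock> blocks) {
  std::sort(blocks.begin(), blocks.end(),
            [](const FreeBlock& a, const FreeBlock& b) { return a.start < b.start; });
  std::vector<FreeBlock> merged_list;
  for (size_t i = 0; i < blocks.size();) {
    FreeBlock merged = blocks[i++];
    // Fold in every block that begins exactly where the merged block ends.
    while (i < blocks.size() && blocks[i].start == merged.start + merged.size) {
      merged.size += blocks[i].size;
      ++i;
    }
    if (merged.size > 0) merged_list.push_back(merged);
  }
  return merged_list;
}
```

Blocks separated by a gap are kept distinct; only perfectly contiguous neighbours merge, exactly as in the listing above.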

◆ SetUp()

bool v8::internal::CodeRange::SetUp ( size_t  requested_size)

Definition at line 99 of file spaces.cc.

99   {
100    DCHECK(code_range_ == NULL);
101  
102    if (requested == 0) {
103      // When a target requires the code range feature, we put all code objects
104      // in a kMaximalCodeRangeSize range of virtual address space, so that
105      // they can call each other with near calls.
106      if (kRequiresCodeRange) {
107        requested = kMaximalCodeRangeSize;
108      } else {
109        return true;
110      }
111    }
112  
113    DCHECK(!kRequiresCodeRange || requested <= kMaximalCodeRangeSize);
114    code_range_ = new base::VirtualMemory(requested);
115    CHECK(code_range_ != NULL);
116    if (!code_range_->IsReserved()) {
117      delete code_range_;
118      code_range_ = NULL;
119      return false;
120    }
121  
122    // We are sure that we have mapped a block of requested addresses.
123    DCHECK(code_range_->size() == requested);
124    LOG(isolate_, NewEvent("CodeRange", code_range_->address(), requested));
125    Address base = reinterpret_cast<Address>(code_range_->address());
126    Address aligned_base =
127        RoundUp(reinterpret_cast<Address>(code_range_->address()),
128                MemoryChunk::kAlignment);
129    size_t size = code_range_->size() - (aligned_base - base);
130    allocation_list_.Add(FreeBlock(aligned_base, size));
131    current_allocation_block_index_ = 0;
132    return true;
133  }

References v8::base::VirtualMemory::address(), allocation_list_, CHECK, code_range_, current_allocation_block_index_, DCHECK, isolate_, v8::base::VirtualMemory::IsReserved(), v8::internal::MemoryChunk::kAlignment, v8::internal::kMaximalCodeRangeSize, v8::internal::kRequiresCodeRange, LOG, NULL, v8::internal::RoundUp(), v8::base::VirtualMemory::size(), and size().

Referenced by v8::internal::Heap::SetUp().

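SetUp() reserves first and aligns second: the usable block begins at the reserved base rounded up to MemoryChunk::kAlignment, and the bytes skipped by that rounding are deducted from the usable size. A standalone sketch of the arithmetic (the alignment constant is an assumed example value, and the names are illustrative):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Assumed example value for MemoryChunk::kAlignment.
constexpr uintptr_t kAlignment = 0x10000;

// Round `value` up to a power-of-two `alignment`.
constexpr uintptr_t RoundUp(uintptr_t value, uintptr_t alignment) {
  return (value + alignment - 1) & ~(alignment - 1);
}

struct UsableBlock {
  uintptr_t start;
  size_t size;
};

// Given a reserved [base, base + reserved_size) region, compute the
// aligned first free block, as in CodeRange::SetUp().
UsableBlock AlignReservation(uintptr_t base, size_t reserved_size) {
  uintptr_t aligned_base = RoundUp(base, kAlignment);
  return {aligned_base,
          static_cast<size_t>(reserved_size - (aligned_base - base))};
}
```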

◆ size()

size_t v8::internal::CodeRange::size ( )
inline

Definition at line 883 of file spaces.h.

883  {
884  DCHECK(valid());
885  return code_range_->size();
886  }

References code_range_, DCHECK, v8::base::VirtualMemory::size(), and valid().

Referenced by v8::Isolate::GetCodeRange(), GetNextAllocationBlock(), and SetUp().


◆ start()

Address v8::internal::CodeRange::start ( )
inline

Definition at line 879 of file spaces.h.

879  {
880  DCHECK(valid());
881  return static_cast<Address>(code_range_->address());
882  }

References v8::base::VirtualMemory::address(), code_range_, DCHECK, and valid().

Referenced by CommitRawMemory(), contains(), v8::Isolate::GetCodeRange(), v8::internal::Assembler::runtime_entry_at(), and UncommitRawMemory().


◆ TearDown()

void v8::internal::CodeRange::TearDown ( )

Definition at line 240 of file spaces.cc.

240  {
241  delete code_range_; // Frees all memory in the virtual memory range.
242  code_range_ = NULL;
243  free_list_.Free();
244  allocation_list_.Free();
245 }

References allocation_list_, code_range_, free_list_, and NULL.

Referenced by ~CodeRange().


◆ UncommitRawMemory()

bool v8::internal::CodeRange::UncommitRawMemory ( Address  start,
size_t  length 
)

Definition at line 228 of file spaces.cc.

228  {
229  return code_range_->Uncommit(start, length);
230 }

References code_range_, start(), and v8::base::VirtualMemory::Uncommit().

Referenced by v8::internal::MemoryChunk::CommitArea().


◆ valid()

bool v8::internal::CodeRange::valid ( )
inline

Definition at line 878 of file spaces.h.

878 { return code_range_ != NULL; }

References code_range_, and NULL.

Referenced by v8::internal::MemoryAllocator::AllocateChunk(), v8::internal::Heap::AllocateCode(), v8::internal::MemoryChunk::CommitArea(), contains(), v8::internal::Heap::CopyCode(), v8::internal::MemoryAllocator::FreeMemory(), v8::Isolate::GetCodeRange(), size(), v8::internal::PagedSpace::SizeOfFirstPage(), and start().


Member Data Documentation

◆ allocation_list_

List<FreeBlock> v8::internal::CodeRange::allocation_list_
private

Definition at line 932 of file spaces.h.

Referenced by AllocateRawMemory(), GetNextAllocationBlock(), SetUp(), and TearDown().

◆ code_range_

base::VirtualMemory* v8::internal::CodeRange::code_range_
private

Definition at line 907 of file spaces.h.

Referenced by AllocateRawMemory(), contains(), FreeRawMemory(), SetUp(), size(), start(), TearDown(), UncommitRawMemory(), and valid().

◆ current_allocation_block_index_

int v8::internal::CodeRange::current_allocation_block_index_
private

Definition at line 933 of file spaces.h.

Referenced by AllocateRawMemory(), GetNextAllocationBlock(), and SetUp().

◆ free_list_

List<FreeBlock> v8::internal::CodeRange::free_list_
private

Definition at line 929 of file spaces.h.

Referenced by FreeRawMemory(), GetNextAllocationBlock(), and TearDown().

◆ isolate_

Isolate* v8::internal::CodeRange::isolate_
private

Definition at line 904 of file spaces.h.

Referenced by AllocateRawMemory(), CommitRawMemory(), and SetUp().


The documentation for this class was generated from the following files:

spaces.h
spaces.cc