duckstation

duckstation, archived from the revision just before upstream changed it to a proprietary software project; this version is the libre one
git clone https://git.neptards.moe/u3shit/duckstation.git

vk_mem_alloc.h (713348B)


//
// Copyright (c) 2017-2024 Advanced Micro Devices, Inc. All rights reserved.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
//

#ifndef AMD_VULKAN_MEMORY_ALLOCATOR_H
#define AMD_VULKAN_MEMORY_ALLOCATOR_H

/** \mainpage Vulkan Memory Allocator

<b>Version 3.1.0</b>

Copyright (c) 2017-2024 Advanced Micro Devices, Inc. All rights reserved. \n
License: MIT \n
See also: [product page on GPUOpen](https://gpuopen.com/gaming-product/vulkan-memory-allocator/),
[repository on GitHub](https://github.com/GPUOpen-LibrariesAndSDKs/VulkanMemoryAllocator)


<b>API documentation divided into groups:</b> [Topics](topics.html)

<b>General documentation chapters:</b>

- <b>User guide</b>
  - \subpage quick_start
    - [Project setup](@ref quick_start_project_setup)
    - [Initialization](@ref quick_start_initialization)
    - [Resource allocation](@ref quick_start_resource_allocation)
  - \subpage choosing_memory_type
    - [Usage](@ref choosing_memory_type_usage)
    - [Required and preferred flags](@ref choosing_memory_type_required_preferred_flags)
    - [Explicit memory types](@ref choosing_memory_type_explicit_memory_types)
    - [Custom memory pools](@ref choosing_memory_type_custom_memory_pools)
    - [Dedicated allocations](@ref choosing_memory_type_dedicated_allocations)
  - \subpage memory_mapping
    - [Copy functions](@ref memory_mapping_copy_functions)
    - [Mapping functions](@ref memory_mapping_mapping_functions)
    - [Persistently mapped memory](@ref memory_mapping_persistently_mapped_memory)
    - [Cache flush and invalidate](@ref memory_mapping_cache_control)
  - \subpage staying_within_budget
    - [Querying for budget](@ref staying_within_budget_querying_for_budget)
    - [Controlling memory usage](@ref staying_within_budget_controlling_memory_usage)
  - \subpage resource_aliasing
  - \subpage custom_memory_pools
    - [Choosing memory type index](@ref custom_memory_pools_MemTypeIndex)
    - [When not to use custom pools](@ref custom_memory_pools_when_not_use)
    - [Linear allocation algorithm](@ref linear_algorithm)
      - [Free-at-once](@ref linear_algorithm_free_at_once)
      - [Stack](@ref linear_algorithm_stack)
      - [Double stack](@ref linear_algorithm_double_stack)
      - [Ring buffer](@ref linear_algorithm_ring_buffer)
  - \subpage defragmentation
  - \subpage statistics
    - [Numeric statistics](@ref statistics_numeric_statistics)
    - [JSON dump](@ref statistics_json_dump)
  - \subpage allocation_annotation
    - [Allocation user data](@ref allocation_user_data)
    - [Allocation names](@ref allocation_names)
  - \subpage virtual_allocator
  - \subpage debugging_memory_usage
    - [Memory initialization](@ref debugging_memory_usage_initialization)
    - [Margins](@ref debugging_memory_usage_margins)
    - [Corruption detection](@ref debugging_memory_usage_corruption_detection)
    - [Leak detection features](@ref debugging_memory_usage_leak_detection)
  - \subpage other_api_interop
- \subpage usage_patterns
    - [GPU-only resource](@ref usage_patterns_gpu_only)
    - [Staging copy for upload](@ref usage_patterns_staging_copy_upload)
    - [Readback](@ref usage_patterns_readback)
    - [Advanced data uploading](@ref usage_patterns_advanced_data_uploading)
    - [Other use cases](@ref usage_patterns_other_use_cases)
- \subpage configuration
  - [Pointers to Vulkan functions](@ref config_Vulkan_functions)
  - [Custom host memory allocator](@ref custom_memory_allocator)
  - [Device memory allocation callbacks](@ref allocation_callbacks)
  - [Device heap memory limit](@ref heap_memory_limit)
- <b>Extension support</b>
    - \subpage vk_khr_dedicated_allocation
    - \subpage enabling_buffer_device_address
    - \subpage vk_ext_memory_priority
    - \subpage vk_amd_device_coherent_memory
- \subpage general_considerations
  - [Thread safety](@ref general_considerations_thread_safety)
  - [Versioning and compatibility](@ref general_considerations_versioning_and_compatibility)
  - [Validation layer warnings](@ref general_considerations_validation_layer_warnings)
  - [Allocation algorithm](@ref general_considerations_allocation_algorithm)
  - [Features not supported](@ref general_considerations_features_not_supported)

\defgroup group_init Library initialization

\brief API elements related to the initialization and management of the entire library, especially the #VmaAllocator object.

\defgroup group_alloc Memory allocation

\brief API elements related to the allocation, deallocation, and management of Vulkan memory, buffers, and images.
The most basic ones are vmaCreateBuffer() and vmaCreateImage().

\defgroup group_virtual Virtual allocator

\brief API elements related to the mechanism of \ref virtual_allocator - using the core allocation algorithm
for a user-defined purpose without allocating any real GPU memory.

\defgroup group_stats Statistics

\brief API elements that query the current status of the allocator, from memory usage and budget to a full dump of the internal state in JSON format.
See documentation chapter: \ref statistics.
*/


#ifdef __cplusplus
extern "C" {
#endif

#if !defined(VULKAN_H_)
#include <vulkan/vulkan.h>
#endif

#if !defined(VMA_VULKAN_VERSION)
    #if defined(VK_VERSION_1_3)
        #define VMA_VULKAN_VERSION 1003000
    #elif defined(VK_VERSION_1_2)
        #define VMA_VULKAN_VERSION 1002000
    #elif defined(VK_VERSION_1_1)
        #define VMA_VULKAN_VERSION 1001000
    #else
        #define VMA_VULKAN_VERSION 1000000
    #endif
#endif

#if defined(__ANDROID__) && defined(VK_NO_PROTOTYPES) && VMA_STATIC_VULKAN_FUNCTIONS
    extern PFN_vkGetInstanceProcAddr vkGetInstanceProcAddr;
    extern PFN_vkGetDeviceProcAddr vkGetDeviceProcAddr;
    extern PFN_vkGetPhysicalDeviceProperties vkGetPhysicalDeviceProperties;
    extern PFN_vkGetPhysicalDeviceMemoryProperties vkGetPhysicalDeviceMemoryProperties;
    extern PFN_vkAllocateMemory vkAllocateMemory;
    extern PFN_vkFreeMemory vkFreeMemory;
    extern PFN_vkMapMemory vkMapMemory;
    extern PFN_vkUnmapMemory vkUnmapMemory;
    extern PFN_vkFlushMappedMemoryRanges vkFlushMappedMemoryRanges;
    extern PFN_vkInvalidateMappedMemoryRanges vkInvalidateMappedMemoryRanges;
    extern PFN_vkBindBufferMemory vkBindBufferMemory;
    extern PFN_vkBindImageMemory vkBindImageMemory;
    extern PFN_vkGetBufferMemoryRequirements vkGetBufferMemoryRequirements;
    extern PFN_vkGetImageMemoryRequirements vkGetImageMemoryRequirements;
    extern PFN_vkCreateBuffer vkCreateBuffer;
    extern PFN_vkDestroyBuffer vkDestroyBuffer;
    extern PFN_vkCreateImage vkCreateImage;
    extern PFN_vkDestroyImage vkDestroyImage;
    extern PFN_vkCmdCopyBuffer vkCmdCopyBuffer;
    #if VMA_VULKAN_VERSION >= 1001000
        extern PFN_vkGetBufferMemoryRequirements2 vkGetBufferMemoryRequirements2;
        extern PFN_vkGetImageMemoryRequirements2 vkGetImageMemoryRequirements2;
        extern PFN_vkBindBufferMemory2 vkBindBufferMemory2;
        extern PFN_vkBindImageMemory2 vkBindImageMemory2;
        extern PFN_vkGetPhysicalDeviceMemoryProperties2 vkGetPhysicalDeviceMemoryProperties2;
    #endif // #if VMA_VULKAN_VERSION >= 1001000
#endif // #if defined(__ANDROID__) && VMA_STATIC_VULKAN_FUNCTIONS && VK_NO_PROTOTYPES

#if !defined(VMA_DEDICATED_ALLOCATION)
    #if VK_KHR_get_memory_requirements2 && VK_KHR_dedicated_allocation
        #define VMA_DEDICATED_ALLOCATION 1
    #else
        #define VMA_DEDICATED_ALLOCATION 0
    #endif
#endif

#if !defined(VMA_BIND_MEMORY2)
    #if VK_KHR_bind_memory2
        #define VMA_BIND_MEMORY2 1
    #else
        #define VMA_BIND_MEMORY2 0
    #endif
#endif

#if !defined(VMA_MEMORY_BUDGET)
    #if VK_EXT_memory_budget && (VK_KHR_get_physical_device_properties2 || VMA_VULKAN_VERSION >= 1001000)
        #define VMA_MEMORY_BUDGET 1
    #else
        #define VMA_MEMORY_BUDGET 0
    #endif
#endif

// Defined to 1 when VK_KHR_buffer_device_address device extension or equivalent core Vulkan 1.2 feature is defined in its headers.
#if !defined(VMA_BUFFER_DEVICE_ADDRESS)
    #if VK_KHR_buffer_device_address || VMA_VULKAN_VERSION >= 1002000
        #define VMA_BUFFER_DEVICE_ADDRESS 1
    #else
        #define VMA_BUFFER_DEVICE_ADDRESS 0
    #endif
#endif

// Defined to 1 when VK_EXT_memory_priority device extension is defined in Vulkan headers.
#if !defined(VMA_MEMORY_PRIORITY)
    #if VK_EXT_memory_priority
        #define VMA_MEMORY_PRIORITY 1
    #else
        #define VMA_MEMORY_PRIORITY 0
    #endif
#endif

// Defined to 1 when VK_KHR_maintenance4 device extension is defined in Vulkan headers.
#if !defined(VMA_KHR_MAINTENANCE4)
    #if VK_KHR_maintenance4
        #define VMA_KHR_MAINTENANCE4 1
    #else
        #define VMA_KHR_MAINTENANCE4 0
    #endif
#endif

// Defined to 1 when VK_KHR_maintenance5 device extension is defined in Vulkan headers.
#if !defined(VMA_KHR_MAINTENANCE5)
    #if VK_KHR_maintenance5
        #define VMA_KHR_MAINTENANCE5 1
    #else
        #define VMA_KHR_MAINTENANCE5 0
    #endif
#endif


// Defined to 1 when VK_KHR_external_memory device extension is defined in Vulkan headers.
#if !defined(VMA_EXTERNAL_MEMORY)
    #if VK_KHR_external_memory
        #define VMA_EXTERNAL_MEMORY 1
    #else
        #define VMA_EXTERNAL_MEMORY 0
    #endif
#endif

// Define these macros to decorate all public functions with additional code,
// before and after the returned type, appropriately. This may be useful for
// exporting the functions when compiling VMA as a separate library. Example:
// #define VMA_CALL_PRE  __declspec(dllexport)
// #define VMA_CALL_POST __cdecl
#ifndef VMA_CALL_PRE
    #define VMA_CALL_PRE
#endif
#ifndef VMA_CALL_POST
    #define VMA_CALL_POST
#endif

// Define this macro to decorate pNext pointers with an attribute specifying the Vulkan
// structure that will be extended via the pNext chain.
#ifndef VMA_EXTENDS_VK_STRUCT
    #define VMA_EXTENDS_VK_STRUCT(vkStruct)
#endif

// Define this macro to decorate pointers with an attribute specifying the
// length of the array they point to if they are not null.
//
// The length may be one of:
// - The name of another parameter in the argument list where the pointer is declared
// - The name of another member in the struct where the pointer is declared
// - The name of a member of a struct type, meaning the value of that member in
//   the context of the call. For example
//   VMA_LEN_IF_NOT_NULL("VkPhysicalDeviceMemoryProperties::memoryHeapCount"),
//   this means the number of memory heaps available in the device associated
//   with the VmaAllocator being dealt with.
#ifndef VMA_LEN_IF_NOT_NULL
    #define VMA_LEN_IF_NOT_NULL(len)
#endif

// The VMA_NULLABLE macro is defined to be _Nullable when compiling with Clang.
// see: https://clang.llvm.org/docs/AttributeReference.html#nullable
#ifndef VMA_NULLABLE
    #ifdef __clang__
        #define VMA_NULLABLE _Nullable
    #else
        #define VMA_NULLABLE
    #endif
#endif

// The VMA_NOT_NULL macro is defined to be _Nonnull when compiling with Clang.
// see: https://clang.llvm.org/docs/AttributeReference.html#nonnull
#ifndef VMA_NOT_NULL
    #ifdef __clang__
        #define VMA_NOT_NULL _Nonnull
    #else
        #define VMA_NOT_NULL
    #endif
#endif

// If non-dispatchable handles are represented as pointers then we can give
// them nullability annotations.
#ifndef VMA_NOT_NULL_NON_DISPATCHABLE
    #if defined(__LP64__) || defined(_WIN64) || (defined(__x86_64__) && !defined(__ILP32__) ) || defined(_M_X64) || defined(__ia64) || defined (_M_IA64) || defined(__aarch64__) || defined(__powerpc64__)
        #define VMA_NOT_NULL_NON_DISPATCHABLE VMA_NOT_NULL
    #else
        #define VMA_NOT_NULL_NON_DISPATCHABLE
    #endif
#endif

#ifndef VMA_NULLABLE_NON_DISPATCHABLE
    #if defined(__LP64__) || defined(_WIN64) || (defined(__x86_64__) && !defined(__ILP32__) ) || defined(_M_X64) || defined(__ia64) || defined (_M_IA64) || defined(__aarch64__) || defined(__powerpc64__)
        #define VMA_NULLABLE_NON_DISPATCHABLE VMA_NULLABLE
    #else
        #define VMA_NULLABLE_NON_DISPATCHABLE
    #endif
#endif

#ifndef VMA_STATS_STRING_ENABLED
    #define VMA_STATS_STRING_ENABLED 1
#endif

////////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////
//
//    INTERFACE
//
////////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////

// Sections for managing code placement in file, only for development purposes e.g. for convenient folding inside an IDE.
#ifndef _VMA_ENUM_DECLARATIONS

/**
\addtogroup group_init
@{
*/

/// Flags for created #VmaAllocator.
typedef enum VmaAllocatorCreateFlagBits
{
    /** \brief Allocator and all objects created from it will not be synchronized internally, so you must guarantee they are used from only one thread at a time or synchronized externally by you.

    Using this flag may increase performance because internal mutexes are not used.
    */
    VMA_ALLOCATOR_CREATE_EXTERNALLY_SYNCHRONIZED_BIT = 0x00000001,
    /** \brief Enables usage of VK_KHR_dedicated_allocation extension.

    The flag works only if VmaAllocatorCreateInfo::vulkanApiVersion `== VK_API_VERSION_1_0`.
    When it is `VK_API_VERSION_1_1`, the flag is ignored because the extension has been promoted to Vulkan 1.1.

    Using this extension will automatically allocate dedicated blocks of memory for
    some buffers and images instead of suballocating place for them out of bigger
    memory blocks (as if you explicitly used #VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT
    flag) when it is recommended by the driver. It may improve performance on some
    GPUs.

    You may set this flag only if you found out that the following device extensions are
    supported, you enabled them while creating the Vulkan device passed as
    VmaAllocatorCreateInfo::device, and you want them to be used internally by this
    library:

    - VK_KHR_get_memory_requirements2 (device extension)
    - VK_KHR_dedicated_allocation (device extension)

    When this flag is set, you can experience the following warnings reported by the Vulkan
    validation layer. You can ignore them.

    > vkBindBufferMemory(): Binding memory to buffer 0x2d but vkGetBufferMemoryRequirements() has not been called on that buffer.
    */
    VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT = 0x00000002,
    /**
    Enables usage of VK_KHR_bind_memory2 extension.

    The flag works only if VmaAllocatorCreateInfo::vulkanApiVersion `== VK_API_VERSION_1_0`.
    When it is `VK_API_VERSION_1_1`, the flag is ignored because the extension has been promoted to Vulkan 1.1.

    You may set this flag only if you found out that this device extension is supported,
    you enabled it while creating the Vulkan device passed as VmaAllocatorCreateInfo::device,
    and you want it to be used internally by this library.

    The extension provides functions `vkBindBufferMemory2KHR` and `vkBindImageMemory2KHR`,
    which allow passing a chain of `pNext` structures while binding.
    This flag is required if you use the `pNext` parameter in vmaBindBufferMemory2() or vmaBindImageMemory2().
    */
    VMA_ALLOCATOR_CREATE_KHR_BIND_MEMORY2_BIT = 0x00000004,
    /**
    Enables usage of VK_EXT_memory_budget extension.

    You may set this flag only if you found out that this device extension is supported,
    you enabled it while creating the Vulkan device passed as VmaAllocatorCreateInfo::device,
    and you want it to be used internally by this library, along with another instance extension
    VK_KHR_get_physical_device_properties2, which is required by it (or Vulkan 1.1, where this extension is promoted).

    The extension provides a query for current memory usage and budget, which will probably
    be more accurate than the estimation used by the library otherwise.
    */
    VMA_ALLOCATOR_CREATE_EXT_MEMORY_BUDGET_BIT = 0x00000008,
    /**
    Enables usage of VK_AMD_device_coherent_memory extension.

    You may set this flag only if you:

    - found out that this device extension is supported and enabled it while creating the Vulkan device passed as VmaAllocatorCreateInfo::device,
    - checked that `VkPhysicalDeviceCoherentMemoryFeaturesAMD::deviceCoherentMemory` is true and set it while creating the Vulkan device,
    - want it to be used internally by this library.

    The extension and accompanying device feature provide access to memory types with
    `VK_MEMORY_PROPERTY_DEVICE_COHERENT_BIT_AMD` and `VK_MEMORY_PROPERTY_DEVICE_UNCACHED_BIT_AMD` flags.
    They are useful mostly for writing breadcrumb markers - a common method for debugging GPU crash/hang/TDR.

    When the extension is not enabled, such memory types are still enumerated, but their usage is illegal.
    To protect against this error, if you don't create the allocator with this flag, it will refuse to allocate any memory or create a custom pool in such a memory type,
    returning `VK_ERROR_FEATURE_NOT_PRESENT`.
    */
    VMA_ALLOCATOR_CREATE_AMD_DEVICE_COHERENT_MEMORY_BIT = 0x00000010,
    /**
    Enables usage of the "buffer device address" feature, which allows you to use the
    `vkGetBufferDeviceAddress*` functions to get a raw GPU pointer to a buffer and pass it for use inside a shader.

    You may set this flag only if you:

    1. (For Vulkan version < 1.2) Found as available and enabled the device extension
    VK_KHR_buffer_device_address.
    This extension is promoted to core Vulkan 1.2.
    2. Found as available and enabled the device feature `VkPhysicalDeviceBufferDeviceAddressFeatures::bufferDeviceAddress`.

    When this flag is set, you can create buffers with `VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT` using VMA.
    The library automatically adds `VK_MEMORY_ALLOCATE_DEVICE_ADDRESS_BIT` to
    allocated memory blocks wherever it might be needed.

    For more information, see documentation chapter \ref enabling_buffer_device_address.
    */
    VMA_ALLOCATOR_CREATE_BUFFER_DEVICE_ADDRESS_BIT = 0x00000020,
    /**
    Enables usage of VK_EXT_memory_priority extension in the library.

    You may set this flag only if you found this device extension available and enabled it,
    along with `VkPhysicalDeviceMemoryPriorityFeaturesEXT::memoryPriority == VK_TRUE`,
    while creating the Vulkan device passed as VmaAllocatorCreateInfo::device.

    When this flag is used, VmaAllocationCreateInfo::priority and VmaPoolCreateInfo::priority
    are used to set priorities of allocated Vulkan memory. Without it, these variables are ignored.

    A priority must be a floating-point value between 0 and 1, indicating the priority of the allocation relative to other memory allocations.
    Larger values are higher priority. The granularity of the priorities is implementation-dependent.
    It is automatically passed to every call to `vkAllocateMemory` done by the library using the structure `VkMemoryPriorityAllocateInfoEXT`.
    The value to be used for default priority is 0.5.
    For more details, see the documentation of the VK_EXT_memory_priority extension.
    */
    VMA_ALLOCATOR_CREATE_EXT_MEMORY_PRIORITY_BIT = 0x00000040,
    /**
    Enables usage of VK_KHR_maintenance4 extension in the library.

    You may set this flag only if you found this device extension available and enabled it
    while creating the Vulkan device passed as VmaAllocatorCreateInfo::device.
    */
    VMA_ALLOCATOR_CREATE_KHR_MAINTENANCE4_BIT = 0x00000080,
    /**
    Enables usage of VK_KHR_maintenance5 extension in the library.

    You should set this flag if you found this device extension available and enabled it
    while creating the Vulkan device passed as VmaAllocatorCreateInfo::device.
    */
    VMA_ALLOCATOR_CREATE_KHR_MAINTENANCE5_BIT = 0x00000100,

    VMA_ALLOCATOR_CREATE_FLAG_BITS_MAX_ENUM = 0x7FFFFFFF
} VmaAllocatorCreateFlagBits;
/// See #VmaAllocatorCreateFlagBits.
typedef VkFlags VmaAllocatorCreateFlags;

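/*
Illustrative sketch (not part of the original VMA docs): creating an allocator
with one of these flags. It assumes `physicalDevice`, `device`, and `instance`
are valid Vulkan handles, that VK_EXT_memory_budget was enabled on the device,
and that Vulkan functions are linked statically (the default
VMA_STATIC_VULKAN_FUNCTIONS configuration).

\code
VmaAllocatorCreateInfo allocatorCreateInfo = {};
allocatorCreateInfo.flags = VMA_ALLOCATOR_CREATE_EXT_MEMORY_BUDGET_BIT;
allocatorCreateInfo.vulkanApiVersion = VK_API_VERSION_1_2;
allocatorCreateInfo.physicalDevice = physicalDevice;
allocatorCreateInfo.device = device;
allocatorCreateInfo.instance = instance;

VmaAllocator allocator;
VkResult res = vmaCreateAllocator(&allocatorCreateInfo, &allocator);
// ... check res, use the allocator ...
vmaDestroyAllocator(allocator);
\endcode
*/
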
/** @} */

/**
\addtogroup group_alloc
@{
*/

/// \brief Intended usage of the allocated memory.
typedef enum VmaMemoryUsage
{
    /** No intended memory usage specified.
    Use other members of VmaAllocationCreateInfo to specify your requirements.
    */
    VMA_MEMORY_USAGE_UNKNOWN = 0,
    /**
    \deprecated Obsolete, preserved for backward compatibility.
    Prefers `VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT`.
    */
    VMA_MEMORY_USAGE_GPU_ONLY = 1,
    /**
    \deprecated Obsolete, preserved for backward compatibility.
    Guarantees `VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT` and `VK_MEMORY_PROPERTY_HOST_COHERENT_BIT`.
    */
    VMA_MEMORY_USAGE_CPU_ONLY = 2,
    /**
    \deprecated Obsolete, preserved for backward compatibility.
    Guarantees `VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT`, prefers `VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT`.
    */
    VMA_MEMORY_USAGE_CPU_TO_GPU = 3,
    /**
    \deprecated Obsolete, preserved for backward compatibility.
    Guarantees `VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT`, prefers `VK_MEMORY_PROPERTY_HOST_CACHED_BIT`.
    */
    VMA_MEMORY_USAGE_GPU_TO_CPU = 4,
    /**
    \deprecated Obsolete, preserved for backward compatibility.
    Prefers not `VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT`.
    */
    VMA_MEMORY_USAGE_CPU_COPY = 5,
    /**
    Lazily allocated GPU memory having `VK_MEMORY_PROPERTY_LAZILY_ALLOCATED_BIT`.
    Exists mostly on mobile platforms. Using it on a desktop PC or other GPUs with no such memory type present will make the allocation fail.

    Usage: Memory for transient attachment images (color attachments, depth attachments etc.), created with `VK_IMAGE_USAGE_TRANSIENT_ATTACHMENT_BIT`.

    Allocations with this usage are always created as dedicated - it implies #VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT.
    */
    VMA_MEMORY_USAGE_GPU_LAZILY_ALLOCATED = 6,
    /**
    Selects the best memory type automatically.
    This flag is recommended for most common use cases.

    When using this flag, if you want to map the allocation (using vmaMapMemory() or #VMA_ALLOCATION_CREATE_MAPPED_BIT),
    you must pass one of the flags: #VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT or #VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT
    in VmaAllocationCreateInfo::flags.

    It can be used only with functions that let the library know `VkBufferCreateInfo` or `VkImageCreateInfo`, e.g.
    vmaCreateBuffer(), vmaCreateImage(), vmaFindMemoryTypeIndexForBufferInfo(), vmaFindMemoryTypeIndexForImageInfo(),
    and not with generic memory allocation functions.
    */
    VMA_MEMORY_USAGE_AUTO = 7,
    /**
    Selects the best memory type automatically with preference for GPU (device) memory.

    When using this flag, if you want to map the allocation (using vmaMapMemory() or #VMA_ALLOCATION_CREATE_MAPPED_BIT),
    you must pass one of the flags: #VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT or #VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT
    in VmaAllocationCreateInfo::flags.

    It can be used only with functions that let the library know `VkBufferCreateInfo` or `VkImageCreateInfo`, e.g.
    vmaCreateBuffer(), vmaCreateImage(), vmaFindMemoryTypeIndexForBufferInfo(), vmaFindMemoryTypeIndexForImageInfo(),
    and not with generic memory allocation functions.
    */
    VMA_MEMORY_USAGE_AUTO_PREFER_DEVICE = 8,
    /**
    Selects the best memory type automatically with preference for CPU (host) memory.

    When using this flag, if you want to map the allocation (using vmaMapMemory() or #VMA_ALLOCATION_CREATE_MAPPED_BIT),
    you must pass one of the flags: #VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT or #VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT
    in VmaAllocationCreateInfo::flags.

    It can be used only with functions that let the library know `VkBufferCreateInfo` or `VkImageCreateInfo`, e.g.
    vmaCreateBuffer(), vmaCreateImage(), vmaFindMemoryTypeIndexForBufferInfo(), vmaFindMemoryTypeIndexForImageInfo(),
    and not with generic memory allocation functions.
    */
    VMA_MEMORY_USAGE_AUTO_PREFER_HOST = 9,

    VMA_MEMORY_USAGE_MAX_ENUM = 0x7FFFFFFF
} VmaMemoryUsage;

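/*
Illustrative sketch (not part of the original VMA docs): using
#VMA_MEMORY_USAGE_AUTO with vmaCreateBuffer(), assuming `allocator` is a valid
VmaAllocator.

\code
VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
bufCreateInfo.size = 65536;
bufCreateInfo.usage = VK_BUFFER_USAGE_VERTEX_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;

VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO; // Let the library pick the memory type.

VkBuffer buffer;
VmaAllocation allocation;
vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buffer, &allocation, NULL);
// ... use the buffer ...
vmaDestroyBuffer(allocator, buffer, allocation);
\endcode
*/
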
/// Flags to be passed as VmaAllocationCreateInfo::flags.
typedef enum VmaAllocationCreateFlagBits
{
    /** \brief Set this flag if the allocation should have its own memory block.

    Use it for special, big resources, like fullscreen images used as attachments.

    If you use this flag while creating a buffer or an image, the `VkMemoryDedicatedAllocateInfo`
    structure is applied if possible.
    */
    VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT = 0x00000001,

    /** \brief Set this flag to only try to allocate from existing `VkDeviceMemory` blocks and never create a new such block.

    If the new allocation cannot be placed in any of the existing blocks, the allocation
    fails with the `VK_ERROR_OUT_OF_DEVICE_MEMORY` error.

    You should not use #VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT and
    #VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT at the same time. It makes no sense.
    */
    VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT = 0x00000002,
    /** \brief Set this flag to use memory that will be persistently mapped and retrieve a pointer to it.

    A pointer to the mapped memory will be returned through VmaAllocationInfo::pMappedData.

    It is valid to use this flag for an allocation made from a memory type that is not
    `HOST_VISIBLE`. This flag is then ignored and the memory is not mapped. This is
    useful if you need an allocation that is efficient to use on GPU
    (`DEVICE_LOCAL`) and still want to map it directly if possible on platforms that
    support it (e.g. Intel GPU).
    */
    VMA_ALLOCATION_CREATE_MAPPED_BIT = 0x00000004,
    /** \deprecated Preserved for backward compatibility. Consider using vmaSetAllocationName() instead.

    Set this flag to treat VmaAllocationCreateInfo::pUserData as a pointer to a
    null-terminated string. Instead of copying the pointer value, a local copy of the
    string is made and stored in the allocation's `pName`. The string is automatically
    freed together with the allocation. It is also used in vmaBuildStatsString().
    */
    VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT = 0x00000020,
    /** Allocation will be created from the upper stack in a double stack pool.

    This flag is only allowed for custom pools created with #VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT flag.
    */
    VMA_ALLOCATION_CREATE_UPPER_ADDRESS_BIT = 0x00000040,
    /** Create both buffer/image and allocation, but don't bind them together.
    It is useful when you want to do the binding yourself, e.g. using some extensions.
    The flag is meaningful only with functions that bind by default: vmaCreateBuffer(), vmaCreateImage().
    Otherwise it is ignored.

    If you want to make sure the new buffer/image is not tied to the new memory allocation
    through the `VkMemoryDedicatedAllocateInfoKHR` structure, in case the allocation ends up in its own memory block,
    also use the flag #VMA_ALLOCATION_CREATE_CAN_ALIAS_BIT.
    */
    VMA_ALLOCATION_CREATE_DONT_BIND_BIT = 0x00000080,
    /** Create the allocation only if the additional device memory required for it, if any, won't exceed
    the memory budget. Otherwise return `VK_ERROR_OUT_OF_DEVICE_MEMORY`.
    */
    VMA_ALLOCATION_CREATE_WITHIN_BUDGET_BIT = 0x00000100,
    /** \brief Set this flag if the allocated memory will have aliasing resources.

    Usage of this flag prevents supplying `VkMemoryDedicatedAllocateInfoKHR` when #VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT is specified.
    Otherwise created dedicated memory will not be suitable for aliasing resources, resulting in Vulkan Validation Layer errors.
    */
    VMA_ALLOCATION_CREATE_CAN_ALIAS_BIT = 0x00000200,
    /**
    Requests possibility to map the allocation (using vmaMapMemory() or #VMA_ALLOCATION_CREATE_MAPPED_BIT).

    - If you use #VMA_MEMORY_USAGE_AUTO or other `VMA_MEMORY_USAGE_AUTO*` value,
      you must use this flag to be able to map the allocation. Otherwise, mapping is incorrect.
    - If you use another value of #VmaMemoryUsage, this flag is ignored and mapping is always possible in memory types that are `HOST_VISIBLE`.
      This includes allocations created in \ref custom_memory_pools.

    Declares that mapped memory will only be written sequentially, e.g. using `memcpy()` or a loop writing number-by-number,
    never read or accessed randomly, so a memory type can be selected that is uncached and write-combined.

    \warning Violating this declaration may work correctly, but will likely be very slow.
    Watch out for implicit reads introduced by doing e.g. `pMappedData[i] += x;`.
    Better prepare your data in a local variable and `memcpy()` it to the mapped pointer all at once.
    */
    VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT = 0x00000400,
    /**
    Requests possibility to map the allocation (using vmaMapMemory() or #VMA_ALLOCATION_CREATE_MAPPED_BIT).

    - If you use #VMA_MEMORY_USAGE_AUTO or other `VMA_MEMORY_USAGE_AUTO*` value,
      you must use this flag to be able to map the allocation. Otherwise, mapping is incorrect.
    - If you use another value of #VmaMemoryUsage, this flag is ignored and mapping is always possible in memory types that are `HOST_VISIBLE`.
      This includes allocations created in \ref custom_memory_pools.

    Declares that mapped memory can be read, written, and accessed in random order,
    so a `HOST_CACHED` memory type is preferred.
    */
    VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT = 0x00000800,
    /**
    Together with #VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT or #VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT,
    it says that despite the request for host access, a not-`HOST_VISIBLE` memory type can be selected
    if it may improve performance.

    By using this flag, you declare that you will check if the allocation ended up in a `HOST_VISIBLE` memory type
    (e.g. using vmaGetAllocationMemoryProperties()) and if not, you will create some "staging" buffer and
    issue an explicit transfer to write/read your data.
    To prepare for this possibility, don't forget to add appropriate flags like
    `VK_BUFFER_USAGE_TRANSFER_DST_BIT`, `VK_BUFFER_USAGE_TRANSFER_SRC_BIT` to the parameters of the created buffer or image.
    */
    VMA_ALLOCATION_CREATE_HOST_ACCESS_ALLOW_TRANSFER_INSTEAD_BIT = 0x00001000,
    /** Allocation strategy that chooses the smallest possible free range for the allocation
    to minimize memory usage and fragmentation, possibly at the expense of allocation time.
    */
    VMA_ALLOCATION_CREATE_STRATEGY_MIN_MEMORY_BIT = 0x00010000,
    /** Allocation strategy that chooses the first suitable free range for the allocation -
    not necessarily in terms of the smallest offset but the one that is easiest and fastest to find
    to minimize allocation time, possibly at the expense of allocation quality.
    */
    VMA_ALLOCATION_CREATE_STRATEGY_MIN_TIME_BIT = 0x00020000,
    /** Allocation strategy that always chooses the lowest offset in available space.
    This is not the most efficient strategy but achieves highly packed data.
    Used internally by defragmentation, not recommended in typical usage.
    */
    VMA_ALLOCATION_CREATE_STRATEGY_MIN_OFFSET_BIT  = 0x00040000,
    /** Alias to #VMA_ALLOCATION_CREATE_STRATEGY_MIN_MEMORY_BIT.
    */
    VMA_ALLOCATION_CREATE_STRATEGY_BEST_FIT_BIT = VMA_ALLOCATION_CREATE_STRATEGY_MIN_MEMORY_BIT,
    /** Alias to #VMA_ALLOCATION_CREATE_STRATEGY_MIN_TIME_BIT.
    */
    VMA_ALLOCATION_CREATE_STRATEGY_FIRST_FIT_BIT = VMA_ALLOCATION_CREATE_STRATEGY_MIN_TIME_BIT,
    /** A bit mask to extract only `STRATEGY` bits from the entire set of flags.
    */
    VMA_ALLOCATION_CREATE_STRATEGY_MASK =
        VMA_ALLOCATION_CREATE_STRATEGY_MIN_MEMORY_BIT |
        VMA_ALLOCATION_CREATE_STRATEGY_MIN_TIME_BIT |
        VMA_ALLOCATION_CREATE_STRATEGY_MIN_OFFSET_BIT,

    VMA_ALLOCATION_CREATE_FLAG_BITS_MAX_ENUM = 0x7FFFFFFF
} VmaAllocationCreateFlagBits;
/// See #VmaAllocationCreateFlagBits.
typedef VkFlags VmaAllocationCreateFlags;

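/*
Illustrative sketch (not part of the original VMA docs): combining
#VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT with
#VMA_ALLOCATION_CREATE_MAPPED_BIT for a persistently mapped staging buffer.
Assumes `allocator` is valid and `myData`/`myDataSize` are the caller's data.

\code
VkBufferCreateInfo stagingBufInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
stagingBufInfo.size = myDataSize;
stagingBufInfo.usage = VK_BUFFER_USAGE_TRANSFER_SRC_BIT;

VmaAllocationCreateInfo stagingAllocCreateInfo = {};
stagingAllocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
stagingAllocCreateInfo.flags = VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT |
    VMA_ALLOCATION_CREATE_MAPPED_BIT;

VkBuffer stagingBuf;
VmaAllocation stagingAlloc;
VmaAllocationInfo stagingAllocInfo;
vmaCreateBuffer(allocator, &stagingBufInfo, &stagingAllocCreateInfo,
    &stagingBuf, &stagingAlloc, &stagingAllocInfo);

// The memory is already mapped; write it sequentially, e.g. with memcpy().
memcpy(stagingAllocInfo.pMappedData, myData, myDataSize);
\endcode
*/
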
/// Flags to be passed as VmaPoolCreateInfo::flags.
typedef enum VmaPoolCreateFlagBits
{
    /** \brief Use this flag if you always allocate only buffers and linear images or only optimal images out of this pool and so Buffer-Image Granularity can be ignored.

    This is an optional optimization flag.

    If you always allocate using vmaCreateBuffer(), vmaCreateImage(),
    vmaAllocateMemoryForBuffer(), then you don't need to use it because the allocator
    knows the exact type of your allocations, so it can handle Buffer-Image Granularity
    in the optimal way.

    If you also allocate using vmaAllocateMemoryForImage() or vmaAllocateMemory(),
    the exact type of such allocations is not known, so the allocator must be conservative
    in handling Buffer-Image Granularity, which can lead to suboptimal allocation
    (wasted memory). In that case, if you can make sure you always allocate only
    buffers and linear images or only optimal images out of this pool, use this flag
    to make the allocator disregard Buffer-Image Granularity and so make allocations
    faster and more optimal.
    */
    VMA_POOL_CREATE_IGNORE_BUFFER_IMAGE_GRANULARITY_BIT = 0x00000002,

    /** \brief Enables alternative, linear allocation algorithm in this pool.

    Specify this flag to enable the linear allocation algorithm, which always creates
    new allocations after the last one and doesn't reuse space from allocations freed in
    between. It trades memory consumption for a simplified algorithm and data
    structure, which has better performance and uses less memory for metadata.

    By using this flag, you can achieve the behavior of a free-at-once, stack,
    ring buffer, or double stack.
    For details, see documentation chapter \ref linear_algorithm.
    */
    VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT = 0x00000004,

    /** Bit mask to extract only `ALGORITHM` bits from the entire set of flags.
    */
    VMA_POOL_CREATE_ALGORITHM_MASK =
        VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT,

    VMA_POOL_CREATE_FLAG_BITS_MAX_ENUM = 0x7FFFFFFF
} VmaPoolCreateFlagBits;
/// Flags to be passed as VmaPoolCreateInfo::flags. See #VmaPoolCreateFlagBits.
typedef VkFlags VmaPoolCreateFlags;

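/*
Illustrative sketch (not part of the original VMA docs): creating a custom pool
with the linear algorithm, assuming `allocator` is valid. See also the
\ref custom_memory_pools chapter.

\code
// Find a memory type suitable for the kind of buffers the pool will hold.
VkBufferCreateInfo sampleBufInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
sampleBufInfo.size = 1024; // Size doesn't influence memory type selection.
sampleBufInfo.usage = VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT;

VmaAllocationCreateInfo sampleAllocCreateInfo = {};
sampleAllocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;

uint32_t memTypeIndex;
vmaFindMemoryTypeIndexForBufferInfo(allocator, &sampleBufInfo, &sampleAllocCreateInfo, &memTypeIndex);

VmaPoolCreateInfo poolCreateInfo = {};
poolCreateInfo.memoryTypeIndex = memTypeIndex;
poolCreateInfo.flags = VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT;

VmaPool pool;
vmaCreatePool(allocator, &poolCreateInfo, &pool);
// ... allocate with VmaAllocationCreateInfo::pool = pool ...
vmaDestroyPool(allocator, pool);
\endcode
*/
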
/// Flags to be passed as VmaDefragmentationInfo::flags.
typedef enum VmaDefragmentationFlagBits
{
    /** \brief Use simple but fast algorithm for defragmentation.
    May not achieve best results but will require least time to compute and least allocations to copy.
    */
    VMA_DEFRAGMENTATION_FLAG_ALGORITHM_FAST_BIT = 0x1,
    /** \brief Default defragmentation algorithm, applied also when no `ALGORITHM` flag is specified.
    Offers a balance between defragmentation quality and the amount of allocations and bytes that need to be moved.
    */
    VMA_DEFRAGMENTATION_FLAG_ALGORITHM_BALANCED_BIT = 0x2,
    /** \brief Perform full defragmentation of memory.
    Can result in notably more time to compute and allocations to copy, but will achieve best memory packing.
    */
    VMA_DEFRAGMENTATION_FLAG_ALGORITHM_FULL_BIT = 0x4,
    /** \brief Use the most robust algorithm at the cost of time to compute and number of copies to make.
    Only available when bufferImageGranularity is greater than 1, since it aims to reduce
    alignment issues between different types of resources.
    Otherwise falls back to same behavior as #VMA_DEFRAGMENTATION_FLAG_ALGORITHM_FULL_BIT.
    */
    VMA_DEFRAGMENTATION_FLAG_ALGORITHM_EXTENSIVE_BIT = 0x8,

    /// A bit mask to extract only `ALGORITHM` bits from the entire set of flags.
    VMA_DEFRAGMENTATION_FLAG_ALGORITHM_MASK =
        VMA_DEFRAGMENTATION_FLAG_ALGORITHM_FAST_BIT |
        VMA_DEFRAGMENTATION_FLAG_ALGORITHM_BALANCED_BIT |
        VMA_DEFRAGMENTATION_FLAG_ALGORITHM_FULL_BIT |
        VMA_DEFRAGMENTATION_FLAG_ALGORITHM_EXTENSIVE_BIT,

    VMA_DEFRAGMENTATION_FLAG_BITS_MAX_ENUM = 0x7FFFFFFF
} VmaDefragmentationFlagBits;
/// See #VmaDefragmentationFlagBits.
typedef VkFlags VmaDefragmentationFlags;

/// Operation performed on a single defragmentation move. See structure #VmaDefragmentationMove.
typedef enum VmaDefragmentationMoveOperation
{
    /// Buffer/image has been recreated at `dstTmpAllocation`, data has been copied, old buffer/image has been destroyed. `srcAllocation` should be changed to point to the new place. This is the default value set by vmaBeginDefragmentationPass().
    VMA_DEFRAGMENTATION_MOVE_OPERATION_COPY = 0,
    /// Set this value if you cannot move the allocation. New place reserved at `dstTmpAllocation` will be freed. `srcAllocation` will remain unchanged.
    VMA_DEFRAGMENTATION_MOVE_OPERATION_IGNORE = 1,
    /// Set this value if you decide to abandon the allocation and you destroyed the buffer/image. New place reserved at `dstTmpAllocation` will be freed, along with `srcAllocation`, which will be destroyed.
    VMA_DEFRAGMENTATION_MOVE_OPERATION_DESTROY = 2,
} VmaDefragmentationMoveOperation;

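/*
Abbreviated sketch (not part of the original VMA docs) of the defragmentation
flow these enums drive, assuming `allocator` is valid. Buffer/image re-creation
and data copying are omitted; see the \ref defragmentation chapter.

\code
VmaDefragmentationInfo defragInfo = {};
defragInfo.flags = VMA_DEFRAGMENTATION_FLAG_ALGORITHM_BALANCED_BIT;

VmaDefragmentationContext defragCtx;
vmaBeginDefragmentation(allocator, &defragInfo, &defragCtx);

VmaDefragmentationPassMoveInfo pass;
// vmaBeginDefragmentationPass() returns VK_INCOMPLETE while moves remain.
while (vmaBeginDefragmentationPass(allocator, defragCtx, &pass) == VK_INCOMPLETE)
{
    for (uint32_t i = 0; i < pass.moveCount; ++i)
    {
        // Recreate the resource at pass.pMoves[i].dstTmpAllocation and copy its
        // data, or set pass.pMoves[i].operation to
        // VMA_DEFRAGMENTATION_MOVE_OPERATION_IGNORE to skip this move.
    }
    if (vmaEndDefragmentationPass(allocator, defragCtx, &pass) == VK_SUCCESS)
        break;
}
vmaEndDefragmentation(allocator, defragCtx, NULL);
\endcode
*/
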
/** @} */

/**
\addtogroup group_virtual
@{
*/

/// Flags to be passed as VmaVirtualBlockCreateInfo::flags.
typedef enum VmaVirtualBlockCreateFlagBits
{
    /** \brief Enables alternative, linear allocation algorithm in this virtual block.

    Specify this flag to enable the linear allocation algorithm, which always creates
    new allocations after the last one and doesn't reuse space from allocations freed in
    between. It trades memory consumption for a simplified algorithm and data
    structure, which has better performance and uses less memory for metadata.

    By using this flag, you can achieve the behavior of a free-at-once, stack,
    ring buffer, or double stack.
    For details, see documentation chapter \ref linear_algorithm.
    */
    VMA_VIRTUAL_BLOCK_CREATE_LINEAR_ALGORITHM_BIT = 0x00000001,

    /** \brief Bit mask to extract only `ALGORITHM` bits from the entire set of flags.
    */
    VMA_VIRTUAL_BLOCK_CREATE_ALGORITHM_MASK =
        VMA_VIRTUAL_BLOCK_CREATE_LINEAR_ALGORITHM_BIT,

    VMA_VIRTUAL_BLOCK_CREATE_FLAG_BITS_MAX_ENUM = 0x7FFFFFFF
} VmaVirtualBlockCreateFlagBits;
/// Flags to be passed as VmaVirtualBlockCreateInfo::flags. See #VmaVirtualBlockCreateFlagBits.
typedef VkFlags VmaVirtualBlockCreateFlags;

/// Flags to be passed as VmaVirtualAllocationCreateInfo::flags.
typedef enum VmaVirtualAllocationCreateFlagBits
{
    /** \brief Allocation will be created from the upper stack in a double stack pool.

    This flag is only allowed for virtual blocks created with #VMA_VIRTUAL_BLOCK_CREATE_LINEAR_ALGORITHM_BIT flag.
    */
    VMA_VIRTUAL_ALLOCATION_CREATE_UPPER_ADDRESS_BIT = VMA_ALLOCATION_CREATE_UPPER_ADDRESS_BIT,
    /** \brief Allocation strategy that tries to minimize memory usage.
    */
    VMA_VIRTUAL_ALLOCATION_CREATE_STRATEGY_MIN_MEMORY_BIT = VMA_ALLOCATION_CREATE_STRATEGY_MIN_MEMORY_BIT,
    /** \brief Allocation strategy that tries to minimize allocation time.
    */
    VMA_VIRTUAL_ALLOCATION_CREATE_STRATEGY_MIN_TIME_BIT = VMA_ALLOCATION_CREATE_STRATEGY_MIN_TIME_BIT,
    /** Allocation strategy that always chooses the lowest offset in available space.
    This is not the most efficient strategy but achieves highly packed data.
    */
    VMA_VIRTUAL_ALLOCATION_CREATE_STRATEGY_MIN_OFFSET_BIT = VMA_ALLOCATION_CREATE_STRATEGY_MIN_OFFSET_BIT,
    /** \brief A bit mask to extract only `STRATEGY` bits from the entire set of flags.

    These strategy flags are binary compatible with equivalent flags in #VmaAllocationCreateFlagBits.
    */
    VMA_VIRTUAL_ALLOCATION_CREATE_STRATEGY_MASK = VMA_ALLOCATION_CREATE_STRATEGY_MASK,

    VMA_VIRTUAL_ALLOCATION_CREATE_FLAG_BITS_MAX_ENUM = 0x7FFFFFFF
} VmaVirtualAllocationCreateFlagBits;
/// Flags to be passed as VmaVirtualAllocationCreateInfo::flags. See #VmaVirtualAllocationCreateFlagBits.
typedef VkFlags VmaVirtualAllocationCreateFlags;

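/*
Brief sketch (not part of the original VMA docs) of the virtual allocator these
flags configure. VmaVirtualBlockCreateInfo and VmaVirtualAllocationCreateInfo
are defined further down in this file.

\code
VmaVirtualBlockCreateInfo blockCreateInfo = {};
blockCreateInfo.size = 1048576; // 1 MiB of "virtual" space, no real GPU memory.

VmaVirtualBlock block;
vmaCreateVirtualBlock(&blockCreateInfo, &block);

VmaVirtualAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.size = 4096;

VmaVirtualAllocation alloc;
VkDeviceSize offset;
vmaVirtualAllocate(block, &allocCreateInfo, &alloc, &offset);
// ... use the sub-range [offset, offset + 4096) of your own resource ...
vmaVirtualFree(block, alloc);
vmaDestroyVirtualBlock(block);
\endcode
*/
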
/** @} */

#endif // _VMA_ENUM_DECLARATIONS

#ifndef _VMA_DATA_TYPES_DECLARATIONS

/**
\addtogroup group_init
@{ */

/** \struct VmaAllocator
\brief Represents the main object of this library, once initialized.

Fill structure #VmaAllocatorCreateInfo and call function vmaCreateAllocator() to create it.
Call function vmaDestroyAllocator() to destroy it.

It is recommended to create just one object of this type per `VkDevice` object,
right after Vulkan is initialized, and to keep it alive until just before the Vulkan device is destroyed.
*/
VK_DEFINE_HANDLE(VmaAllocator)

/** @} */

/**
\addtogroup group_alloc
@{
*/

/** \struct VmaPool
\brief Represents a custom memory pool.

Fill structure VmaPoolCreateInfo and call function vmaCreatePool() to create it.
Call function vmaDestroyPool() to destroy it.

For more information see [Custom memory pools](@ref choosing_memory_type_custom_memory_pools).
*/
VK_DEFINE_HANDLE(VmaPool)

/** \struct VmaAllocation
\brief Represents a single memory allocation.

It may be either a dedicated block of `VkDeviceMemory` or a specific region of a bigger block of this type
plus a unique offset.

There are multiple ways to create such an object.
You need to fill structure VmaAllocationCreateInfo.
For more information see [Choosing memory type](@ref choosing_memory_type).

Although the library provides convenience functions that create a Vulkan buffer or image,
allocate memory for it and bind them together,
binding of the allocation to a buffer or an image is out of scope of the allocation itself.
An allocation object can exist without a buffer/image bound,
binding can be done manually by the user, and destruction of the buffer/image can be done
independently of destruction of the allocation.

The object also remembers its size and some other information.
To retrieve this information, use function vmaGetAllocationInfo() and inspect the
returned structure VmaAllocationInfo.
*/
VK_DEFINE_HANDLE(VmaAllocation)

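/*
Small sketch (not part of the original VMA docs) of querying the information a
#VmaAllocation remembers, assuming `allocator` and `allocation` are valid.

\code
VmaAllocationInfo allocInfo;
vmaGetAllocationInfo(allocator, allocation, &allocInfo);
// allocInfo.deviceMemory, allocInfo.offset, and allocInfo.size describe where
// the allocation lives; allocInfo.pMappedData is non-null if it is mapped.
\endcode
*/
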
/** \struct VmaDefragmentationContext
\brief An opaque object that represents a started defragmentation process.

Fill structure #VmaDefragmentationInfo and call function vmaBeginDefragmentation() to create it.
Call function vmaEndDefragmentation() to destroy it.
*/
VK_DEFINE_HANDLE(VmaDefragmentationContext)

/** @} */

/**
\addtogroup group_virtual
@{
*/

/** \struct VmaVirtualAllocation
\brief Represents a single memory allocation done inside VmaVirtualBlock.

Use it as a unique identifier of a virtual allocation within a single block.

Use the value `VK_NULL_HANDLE` to represent a null/invalid allocation.
*/
VK_DEFINE_NON_DISPATCHABLE_HANDLE(VmaVirtualAllocation)

/** @} */

/**
\addtogroup group_virtual
@{
*/

/** \struct VmaVirtualBlock
\brief Handle to a virtual block object that allows using the core allocation algorithm without allocating any real GPU memory.

Fill in #VmaVirtualBlockCreateInfo structure and use vmaCreateVirtualBlock() to create it. Use vmaDestroyVirtualBlock() to destroy it.
For more information, see documentation chapter \ref virtual_allocator.

This object is not thread-safe: it should not be used from multiple threads simultaneously and must be synchronized externally.
*/
VK_DEFINE_HANDLE(VmaVirtualBlock)

/** @} */

/**
\addtogroup group_init
@{
*/

/// Callback function called after successful vkAllocateMemory.
typedef void (VKAPI_PTR* PFN_vmaAllocateDeviceMemoryFunction)(
    VmaAllocator VMA_NOT_NULL                    allocator,
    uint32_t                                     memoryType,
    VkDeviceMemory VMA_NOT_NULL_NON_DISPATCHABLE memory,
    VkDeviceSize                                 size,
    void* VMA_NULLABLE                           pUserData);

/// Callback function called before vkFreeMemory.
typedef void (VKAPI_PTR* PFN_vmaFreeDeviceMemoryFunction)(
    VmaAllocator VMA_NOT_NULL                    allocator,
    uint32_t                                     memoryType,
    VkDeviceMemory VMA_NOT_NULL_NON_DISPATCHABLE memory,
    VkDeviceSize                                 size,
    void* VMA_NULLABLE                           pUserData);

/** \brief Set of callbacks that the library will call for `vkAllocateMemory` and `vkFreeMemory`.

Provided for informative purposes, e.g. to gather statistics about the number of
allocations or the total amount of memory allocated in Vulkan.

Used in VmaAllocatorCreateInfo::pDeviceMemoryCallbacks.
*/
typedef struct VmaDeviceMemoryCallbacks
{
    /// Optional, can be null.
    PFN_vmaAllocateDeviceMemoryFunction VMA_NULLABLE pfnAllocate;
    /// Optional, can be null.
    PFN_vmaFreeDeviceMemoryFunction VMA_NULLABLE pfnFree;
    /// Optional, can be null.
    void* VMA_NULLABLE pUserData;
} VmaDeviceMemoryCallbacks;

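/*
Illustrative sketch (not part of the original VMA docs) of informative callbacks
used for statistics. The function names are hypothetical; the signatures match
the typedefs above.

\code
static void VKAPI_PTR MyAllocateCallback(VmaAllocator allocator, uint32_t memoryType,
    VkDeviceMemory memory, VkDeviceSize size, void* pUserData)
{
    // E.g. add `size` to a running total of Vulkan memory allocated.
}

static void VKAPI_PTR MyFreeCallback(VmaAllocator allocator, uint32_t memoryType,
    VkDeviceMemory memory, VkDeviceSize size, void* pUserData)
{
    // E.g. subtract `size` from the running total.
}

VmaDeviceMemoryCallbacks deviceMemoryCallbacks = {};
deviceMemoryCallbacks.pfnAllocate = MyAllocateCallback;
deviceMemoryCallbacks.pfnFree = MyFreeCallback;
// Pass via VmaAllocatorCreateInfo::pDeviceMemoryCallbacks.
\endcode
*/
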
    989 /** \brief Pointers to some Vulkan functions - a subset used by the library.
    990 
    991 Used in VmaAllocatorCreateInfo::pVulkanFunctions.
    992 */
    993 typedef struct VmaVulkanFunctions
    994 {
    995     /// Required when using VMA_DYNAMIC_VULKAN_FUNCTIONS.
    996     PFN_vkGetInstanceProcAddr VMA_NULLABLE vkGetInstanceProcAddr;
    997     /// Required when using VMA_DYNAMIC_VULKAN_FUNCTIONS.
    998     PFN_vkGetDeviceProcAddr VMA_NULLABLE vkGetDeviceProcAddr;
    999     PFN_vkGetPhysicalDeviceProperties VMA_NULLABLE vkGetPhysicalDeviceProperties;
   1000     PFN_vkGetPhysicalDeviceMemoryProperties VMA_NULLABLE vkGetPhysicalDeviceMemoryProperties;
   1001     PFN_vkAllocateMemory VMA_NULLABLE vkAllocateMemory;
   1002     PFN_vkFreeMemory VMA_NULLABLE vkFreeMemory;
   1003     PFN_vkMapMemory VMA_NULLABLE vkMapMemory;
   1004     PFN_vkUnmapMemory VMA_NULLABLE vkUnmapMemory;
   1005     PFN_vkFlushMappedMemoryRanges VMA_NULLABLE vkFlushMappedMemoryRanges;
   1006     PFN_vkInvalidateMappedMemoryRanges VMA_NULLABLE vkInvalidateMappedMemoryRanges;
   1007     PFN_vkBindBufferMemory VMA_NULLABLE vkBindBufferMemory;
   1008     PFN_vkBindImageMemory VMA_NULLABLE vkBindImageMemory;
   1009     PFN_vkGetBufferMemoryRequirements VMA_NULLABLE vkGetBufferMemoryRequirements;
   1010     PFN_vkGetImageMemoryRequirements VMA_NULLABLE vkGetImageMemoryRequirements;
   1011     PFN_vkCreateBuffer VMA_NULLABLE vkCreateBuffer;
   1012     PFN_vkDestroyBuffer VMA_NULLABLE vkDestroyBuffer;
   1013     PFN_vkCreateImage VMA_NULLABLE vkCreateImage;
   1014     PFN_vkDestroyImage VMA_NULLABLE vkDestroyImage;
   1015     PFN_vkCmdCopyBuffer VMA_NULLABLE vkCmdCopyBuffer;
   1016 #if VMA_DEDICATED_ALLOCATION || VMA_VULKAN_VERSION >= 1001000
   1017     /// Fetch "vkGetBufferMemoryRequirements2" on Vulkan >= 1.1, fetch "vkGetBufferMemoryRequirements2KHR" when using VK_KHR_dedicated_allocation extension.
   1018     PFN_vkGetBufferMemoryRequirements2KHR VMA_NULLABLE vkGetBufferMemoryRequirements2KHR;
   1019     /// Fetch "vkGetImageMemoryRequirements2" on Vulkan >= 1.1, fetch "vkGetImageMemoryRequirements2KHR" when using VK_KHR_dedicated_allocation extension.
   1020     PFN_vkGetImageMemoryRequirements2KHR VMA_NULLABLE vkGetImageMemoryRequirements2KHR;
   1021 #endif
   1022 #if VMA_BIND_MEMORY2 || VMA_VULKAN_VERSION >= 1001000
   1023     /// Fetch "vkBindBufferMemory2" on Vulkan >= 1.1, fetch "vkBindBufferMemory2KHR" when using VK_KHR_bind_memory2 extension.
   1024     PFN_vkBindBufferMemory2KHR VMA_NULLABLE vkBindBufferMemory2KHR;
   1025     /// Fetch "vkBindImageMemory2" on Vulkan >= 1.1, fetch "vkBindImageMemory2KHR" when using VK_KHR_bind_memory2 extension.
   1026     PFN_vkBindImageMemory2KHR VMA_NULLABLE vkBindImageMemory2KHR;
   1027 #endif
   1028 #if VMA_MEMORY_BUDGET || VMA_VULKAN_VERSION >= 1001000
   1029     /// Fetch from "vkGetPhysicalDeviceMemoryProperties2" on Vulkan >= 1.1, but you can also fetch it from "vkGetPhysicalDeviceMemoryProperties2KHR" if you enabled extension VK_KHR_get_physical_device_properties2.
   1030     PFN_vkGetPhysicalDeviceMemoryProperties2KHR VMA_NULLABLE vkGetPhysicalDeviceMemoryProperties2KHR;
   1031 #endif
   1032 #if VMA_KHR_MAINTENANCE4 || VMA_VULKAN_VERSION >= 1003000
   1033     /// Fetch from "vkGetDeviceBufferMemoryRequirements" on Vulkan >= 1.3, but you can also fetch it from "vkGetDeviceBufferMemoryRequirementsKHR" if you enabled extension VK_KHR_maintenance4.
   1034     PFN_vkGetDeviceBufferMemoryRequirementsKHR VMA_NULLABLE vkGetDeviceBufferMemoryRequirements;
   1035     /// Fetch "vkGetDeviceImageMemoryRequirements" on Vulkan >= 1.3; you can also fetch "vkGetDeviceImageMemoryRequirementsKHR" if you enabled the VK_KHR_maintenance4 extension.
   1036     PFN_vkGetDeviceImageMemoryRequirementsKHR VMA_NULLABLE vkGetDeviceImageMemoryRequirements;
   1037 #endif
   1038 } VmaVulkanFunctions;
   1039 
   1040 /// Description of an Allocator to be created.
   1041 typedef struct VmaAllocatorCreateInfo
   1042 {
   1043     /// Flags for created allocator. Use #VmaAllocatorCreateFlagBits enum.
   1044     VmaAllocatorCreateFlags flags;
   1045     /// Vulkan physical device.
   1046     /** It must remain valid throughout the whole lifetime of the created allocator. */
   1047     VkPhysicalDevice VMA_NOT_NULL physicalDevice;
   1048     /// Vulkan device.
   1049     /** It must remain valid throughout the whole lifetime of the created allocator. */
   1050     VkDevice VMA_NOT_NULL device;
   1051     /// Preferred size of a single `VkDeviceMemory` block to be allocated from large heaps > 1 GiB. Optional.
   1052     /** Set to 0 to use the default, which is currently 256 MiB. */
   1053     VkDeviceSize preferredLargeHeapBlockSize;
   1054     /// Custom CPU memory allocation callbacks. Optional.
   1055     /** Optional, can be null. When specified, they will also be used for all CPU-side memory allocations. */
   1056     const VkAllocationCallbacks* VMA_NULLABLE pAllocationCallbacks;
   1057     /// Informative callbacks for `vkAllocateMemory`, `vkFreeMemory`. Optional.
   1058     /** Optional, can be null. */
   1059     const VmaDeviceMemoryCallbacks* VMA_NULLABLE pDeviceMemoryCallbacks;
   1060     /** \brief Either null or a pointer to an array of limits on the maximum number of bytes that can be allocated out of each Vulkan memory heap.
   1061 
   1062     If not NULL, it must be a pointer to an array of
   1063     `VkPhysicalDeviceMemoryProperties::memoryHeapCount` elements, defining the limit on
   1064     the maximum number of bytes that can be allocated out of a particular Vulkan
   1065     memory heap.
   1066 
   1067     Any of the elements may be equal to `VK_WHOLE_SIZE`, which means no limit on that
   1068     heap. This is also the default in case of `pHeapSizeLimit` = NULL.
   1069 
   1070     If there is a limit defined for a heap:
   1071 
   1072     - If the user tries to allocate more memory from that heap using this allocator,
   1073       the allocation fails with `VK_ERROR_OUT_OF_DEVICE_MEMORY`.
   1074     - If the limit is smaller than the heap size reported in `VkMemoryHeap::size`, the
   1075       value of this limit will be reported instead when using vmaGetMemoryProperties().
   1076 
   1077     Warning! Using this feature may not be equivalent to installing a GPU with a
   1078     smaller amount of memory, because the graphics driver doesn't necessarily fail new
   1079     allocations with `VK_ERROR_OUT_OF_DEVICE_MEMORY` result when memory capacity is
   1080     exceeded. It may return success and just silently migrate some device memory
   1081     blocks to system RAM. This driver behavior can also be controlled using the
   1082     VK_AMD_memory_overallocation_behavior extension.
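
    A hedged sketch of capping one heap (the heap index and size are illustrative;
    `allocatorCreateInfo` is the VmaAllocatorCreateInfo being filled):

    \code
    VkDeviceSize heapSizeLimit[VK_MAX_MEMORY_HEAPS];
    for(uint32_t i = 0; i < VK_MAX_MEMORY_HEAPS; ++i)
        heapSizeLimit[i] = VK_WHOLE_SIZE; // no limit by default
    heapSizeLimit[0] = 1024ull * 1024 * 1024; // cap heap 0 at 1 GiB

    allocatorCreateInfo.pHeapSizeLimit = heapSizeLimit;
    \endcode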
   1083     */
   1084     const VkDeviceSize* VMA_NULLABLE VMA_LEN_IF_NOT_NULL("VkPhysicalDeviceMemoryProperties::memoryHeapCount") pHeapSizeLimit;
   1085 
   1086     /** \brief Pointers to Vulkan functions. Can be null.
   1087 
   1088     For details see [Pointers to Vulkan functions](@ref config_Vulkan_functions).
   1089     */
   1090     const VmaVulkanFunctions* VMA_NULLABLE pVulkanFunctions;
   1091     /** \brief Handle to Vulkan instance object.
   1092 
   1093     Starting from version 3.0.0 this member is no longer optional; it must be set!
   1094     */
   1095     VkInstance VMA_NOT_NULL instance;
   1096     /** \brief Optional. Vulkan version that the application uses.
   1097 
   1098     It must be a value in the format created by the macro `VK_MAKE_VERSION`, or a constant like `VK_API_VERSION_1_1` or `VK_API_VERSION_1_0`.
   1099     The patch version number specified is ignored. Only the major and minor versions are considered.
   1100     Only versions 1.0, 1.1, 1.2, 1.3 are supported by the current implementation.
   1101     Leaving it initialized to zero is equivalent to `VK_API_VERSION_1_0`.
   1102     It must match the Vulkan version used by the application and supported on the selected physical device,
   1103     so it must be no higher than `VkApplicationInfo::apiVersion` passed to `vkCreateInstance`
   1104     and no higher than `VkPhysicalDeviceProperties::apiVersion` found on the physical device used.
   1105     */
   1106     uint32_t vulkanApiVersion;
   1107 #if VMA_EXTERNAL_MEMORY
   1108     /** \brief Either null or a pointer to an array of external memory handle types for each Vulkan memory type.
   1109 
   1110     If not NULL, it must be a pointer to an array of `VkPhysicalDeviceMemoryProperties::memoryTypeCount`
   1111     elements, defining the external memory handle types of a particular Vulkan memory type,
   1112     to be passed using `VkExportMemoryAllocateInfoKHR`.
   1113 
   1114     Any of the elements may be equal to 0, which means not to use `VkExportMemoryAllocateInfoKHR` on this memory type.
   1115     This is also the default in case of `pTypeExternalMemoryHandleTypes` = NULL.
   1116     */
   1117     const VkExternalMemoryHandleTypeFlagsKHR* VMA_NULLABLE VMA_LEN_IF_NOT_NULL("VkPhysicalDeviceMemoryProperties::memoryTypeCount") pTypeExternalMemoryHandleTypes;
   1118 #endif // #if VMA_EXTERNAL_MEMORY
   1119 } VmaAllocatorCreateInfo;
   1120 
   1121 /// Information about existing #VmaAllocator object.
   1122 typedef struct VmaAllocatorInfo
   1123 {
   1124     /** \brief Handle to Vulkan instance object.
   1125 
   1126     This is the same value as has been passed through VmaAllocatorCreateInfo::instance.
   1127     */
   1128     VkInstance VMA_NOT_NULL instance;
   1129     /** \brief Handle to Vulkan physical device object.
   1130 
   1131     This is the same value as has been passed through VmaAllocatorCreateInfo::physicalDevice.
   1132     */
   1133     VkPhysicalDevice VMA_NOT_NULL physicalDevice;
   1134     /** \brief Handle to Vulkan device object.
   1135 
   1136     This is the same value as has been passed through VmaAllocatorCreateInfo::device.
   1137     */
   1138     VkDevice VMA_NOT_NULL device;
   1139 } VmaAllocatorInfo;
   1140 
   1141 /** @} */
   1142 
   1143 /**
   1144 \addtogroup group_stats
   1145 @{
   1146 */
   1147 
   1148 /** \brief Calculated statistics of memory usage, e.g. in a specific memory type, heap, or custom pool, or in total.
   1149 
   1150 These are fast to calculate.
   1151 See functions: vmaGetHeapBudgets(), vmaGetPoolStatistics().
   1152 */
   1153 typedef struct VmaStatistics
   1154 {
   1155     /** \brief Number of `VkDeviceMemory` objects - Vulkan memory blocks allocated.
   1156     */
   1157     uint32_t blockCount;
   1158     /** \brief Number of #VmaAllocation objects allocated.
   1159 
   1160     Dedicated allocations have their own blocks, so each one adds 1 to `allocationCount` as well as `blockCount`.
   1161     */
   1162     uint32_t allocationCount;
   1163     /** \brief Number of bytes allocated in `VkDeviceMemory` blocks.
   1164 
   1165     \note To avoid confusion, please be aware that what Vulkan calls an "allocation" - a whole `VkDeviceMemory` object
   1166     (e.g. as in `VkPhysicalDeviceLimits::maxMemoryAllocationCount`) is called a "block" in VMA, while VMA calls
   1167     "allocation" a #VmaAllocation object that represents a memory region sub-allocated from such block, usually for a single buffer or image.
   1168     */
   1169     VkDeviceSize blockBytes;
   1170     /** \brief Total number of bytes occupied by all #VmaAllocation objects.
   1171 
   1172     Always less than or equal to `blockBytes`.
   1173     The difference `(blockBytes - allocationBytes)` is the amount of memory allocated from Vulkan
   1174     but unused by any #VmaAllocation.
   1175     */
   1176     VkDeviceSize allocationBytes;
   1177 } VmaStatistics;
   1178 
   1179 /** \brief More detailed statistics than #VmaStatistics.
   1180 
   1181 These are slower to calculate. Use for debugging purposes.
   1182 See functions: vmaCalculateStatistics(), vmaCalculatePoolStatistics().
   1183 
   1184 A previous version of the statistics API provided averages, but they have been removed
   1185 because they can easily be calculated as follows (guard against zero counts before dividing):
   1186 
   1187 \code
   1188 VkDeviceSize allocationSizeAvg = detailedStats.statistics.allocationBytes / detailedStats.statistics.allocationCount;
   1189 VkDeviceSize unusedBytes = detailedStats.statistics.blockBytes - detailedStats.statistics.allocationBytes;
   1190 VkDeviceSize unusedRangeSizeAvg = unusedBytes / detailedStats.unusedRangeCount;
   1191 \endcode
   1192 */
   1193 typedef struct VmaDetailedStatistics
   1194 {
   1195     /// Basic statistics.
   1196     VmaStatistics statistics;
   1197     /// Number of free ranges of memory between allocations.
   1198     uint32_t unusedRangeCount;
   1199     /// Smallest allocation size. `VK_WHOLE_SIZE` if there are 0 allocations.
   1200     VkDeviceSize allocationSizeMin;
   1201     /// Largest allocation size. 0 if there are 0 allocations.
   1202     VkDeviceSize allocationSizeMax;
   1203     /// Smallest empty range size. `VK_WHOLE_SIZE` if there are 0 empty ranges.
   1204     VkDeviceSize unusedRangeSizeMin;
   1205     /// Largest empty range size. 0 if there are 0 empty ranges.
   1206     VkDeviceSize unusedRangeSizeMax;
   1207 } VmaDetailedStatistics;
   1208 
   1209 /** \brief General statistics from the current state of the Allocator -
   1210 total memory usage across all memory heaps and types.
   1211 
   1212 These are slower to calculate. Use for debugging purposes.
   1213 See function vmaCalculateStatistics().
   1214 */
   1215 typedef struct VmaTotalStatistics
   1216 {
   1217     VmaDetailedStatistics memoryType[VK_MAX_MEMORY_TYPES];
   1218     VmaDetailedStatistics memoryHeap[VK_MAX_MEMORY_HEAPS];
   1219     VmaDetailedStatistics total;
   1220 } VmaTotalStatistics;
   1221 
   1222 /** \brief Statistics of current memory usage and available budget for a specific memory heap.
   1223 
   1224 These are fast to calculate.
   1225 See function vmaGetHeapBudgets().
   1226 */
   1227 typedef struct VmaBudget
   1228 {
   1229     /** \brief Statistics fetched from the library.
   1230     */
   1231     VmaStatistics statistics;
   1232     /** \brief Estimated current memory usage of the program, in bytes.
   1233 
   1234     Fetched from system using VK_EXT_memory_budget extension if enabled.
   1235 
   1236     It might be different from `statistics.blockBytes` (usually higher) due to additional implicit objects
   1237     also occupying the memory, like the swapchain, pipelines, descriptor pools, command buffers, or
   1238     `VkDeviceMemory` blocks allocated outside of this library, if any.
   1239     */
   1240     VkDeviceSize usage;
   1241     /** \brief Estimated amount of memory available to the program, in bytes.
   1242 
   1243     Fetched from system using VK_EXT_memory_budget extension if enabled.
   1244 
   1245     It might be different (most probably smaller) than `VkMemoryHeap::size[heapIndex]` due to factors
   1246     external to the program, decided by the operating system.
   1247     The difference `budget - usage` is the amount of additional memory that can probably
   1248     be allocated without problems. Exceeding the budget may result in various problems.
   1249     */
   1250     VkDeviceSize budget;
   1251 } VmaBudget;
   1252 
   1253 /** @} */
   1254 
   1255 /**
   1256 \addtogroup group_alloc
   1257 @{
   1258 */
   1259 
   1260 /** \brief Parameters of new #VmaAllocation.
   1261 
   1262 To be used with functions like vmaCreateBuffer(), vmaCreateImage(), and many others.
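
A hedged sketch of the most typical usage, creating a buffer together with its memory
(the buffer parameters are illustrative):

\code
VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
bufCreateInfo.size = 65536;
bufCreateInfo.usage = VK_BUFFER_USAGE_VERTEX_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;

VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;

VkBuffer buf;
VmaAllocation alloc;
VkResult res = vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, NULL);
\endcode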
   1263 */
   1264 typedef struct VmaAllocationCreateInfo
   1265 {
   1266     /// Use #VmaAllocationCreateFlagBits enum.
   1267     VmaAllocationCreateFlags flags;
   1268     /** \brief Intended usage of memory.
   1269 
   1270     You can leave #VMA_MEMORY_USAGE_UNKNOWN if you specify memory requirements in another way. \n
   1271     If `pool` is not null, this member is ignored.
   1272     */
   1273     VmaMemoryUsage usage;
   1274     /** \brief Flags that must be set in a Memory Type chosen for an allocation.
   1275 
   1276     Leave 0 if you specify memory requirements in another way. \n
   1277     If `pool` is not null, this member is ignored.*/
   1278     VkMemoryPropertyFlags requiredFlags;
   1279     /** \brief Flags that preferably should be set in a memory type chosen for an allocation.
   1280 
   1281     Set to 0 if no additional flags are preferred. \n
   1282     If `pool` is not null, this member is ignored. */
   1283     VkMemoryPropertyFlags preferredFlags;
   1284     /** \brief Bitmask containing one bit set for every memory type acceptable for this allocation.
   1285 
   1286     Value 0 is equivalent to `UINT32_MAX` - it means any memory type is accepted if
   1287     it meets other requirements specified by this structure, with no further
   1288     restrictions on memory type index. \n
   1289     If `pool` is not null, this member is ignored.
   1290     */
   1291     uint32_t memoryTypeBits;
   1292     /** \brief Pool that this allocation should be created in.
   1293 
   1294     Leave `VK_NULL_HANDLE` to allocate from default pool. If not null, members:
   1295     `usage`, `requiredFlags`, `preferredFlags`, `memoryTypeBits` are ignored.
   1296     */
   1297     VmaPool VMA_NULLABLE pool;
   1298     /** \brief Custom general-purpose pointer that will be stored in #VmaAllocation, can be read as VmaAllocationInfo::pUserData and changed using vmaSetAllocationUserData().
   1299 
   1300     If #VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT is used, it must be either
   1301     null or a pointer to a null-terminated string. The string will then be copied to
   1302     an internal buffer, so it doesn't need to remain valid after the allocation call.
   1303     */
   1304     void* VMA_NULLABLE pUserData;
   1305     /** \brief A floating-point value between 0 and 1, indicating the priority of the allocation relative to other memory allocations.
   1306 
   1307     It is used only when #VMA_ALLOCATOR_CREATE_EXT_MEMORY_PRIORITY_BIT flag was used during creation of the #VmaAllocator object
   1308     and this allocation ends up as dedicated or is explicitly forced as dedicated using #VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT.
   1309     Otherwise, it has the priority of a memory block where it is placed and this variable is ignored.
   1310     */
   1311     float priority;
   1312 } VmaAllocationCreateInfo;
   1313 
   1314 /// Describes parameters of a #VmaPool to be created.
   1315 typedef struct VmaPoolCreateInfo
   1316 {
   1317     /** \brief Vulkan memory type index to allocate this pool from.
   1318     */
   1319     uint32_t memoryTypeIndex;
   1320     /** \brief Use combination of #VmaPoolCreateFlagBits.
   1321     */
   1322     VmaPoolCreateFlags flags;
   1323     /** \brief Size of a single `VkDeviceMemory` block to be allocated as part of this pool, in bytes. Optional.
   1324 
   1325     Specify a nonzero value to set an explicit, constant size of memory blocks used
   1326     by this pool.
   1327 
   1328     Leave 0 to use the default and let the library manage block sizes automatically.
   1329     Sizes of particular blocks may vary.
   1330     In this case, the pool will also support dedicated allocations.
   1331     */
   1332     VkDeviceSize blockSize;
   1333     /** \brief Minimum number of blocks to be always allocated in this pool, even if they stay empty.
   1334 
   1335     Set to 0 to have no preallocated blocks and allow the pool to be completely empty.
   1336     */
   1337     size_t minBlockCount;
   1338     /** \brief Maximum number of blocks that can be allocated in this pool. Optional.
   1339 
   1340     Set to 0 to use the default, which is `SIZE_MAX`, meaning no limit.
   1341 
   1342     Set to the same value as VmaPoolCreateInfo::minBlockCount to have a fixed amount of memory allocated
   1343     throughout the whole lifetime of this pool.
   1344     */
   1345     size_t maxBlockCount;
   1346     /** \brief A floating-point value between 0 and 1, indicating the priority of the allocations in this pool relative to other memory allocations.
   1347 
   1348     It is used only when #VMA_ALLOCATOR_CREATE_EXT_MEMORY_PRIORITY_BIT flag was used during creation of the #VmaAllocator object.
   1349     Otherwise, this variable is ignored.
   1350     */
   1351     float priority;
   1352     /** \brief Additional minimum alignment to be used for all allocations created from this pool. Can be 0.
   1353 
   1354     Leave 0 (default) not to impose any additional alignment. If not 0, it must be a power of two.
   1355     It can be useful in cases where the alignment returned by Vulkan functions like `vkGetBufferMemoryRequirements` is not enough,
   1356     e.g. when doing interop with OpenGL.
   1357     */
   1358     VkDeviceSize minAllocationAlignment;
   1359     /** \brief Additional `pNext` chain to be attached to `VkMemoryAllocateInfo` used for every allocation made by this pool. Optional.
   1360 
   1361     Optional, can be null. If not null, it must point to a `pNext` chain of structures that can be attached to `VkMemoryAllocateInfo`.
   1362     It can be useful for special needs such as adding `VkExportMemoryAllocateInfoKHR`.
   1363     Structures pointed by this member must remain alive and unchanged for the whole lifetime of the custom pool.
   1364 
   1365     Please note that some structures, e.g. `VkMemoryPriorityAllocateInfoEXT`, `VkMemoryDedicatedAllocateInfoKHR`,
   1366     can be attached automatically by this library when using other, more convenient features of it.
   1367     */
   1368     void* VMA_NULLABLE VMA_EXTENDS_VK_STRUCT(VkMemoryAllocateInfo) pMemoryAllocateNext;
   1369 } VmaPoolCreateInfo;
   1370 
   1371 /** @} */
   1372 
   1373 /**
   1374 \addtogroup group_alloc
   1375 @{
   1376 */
   1377 
   1378 /**
   1379 Parameters of #VmaAllocation objects that can be retrieved using the function vmaGetAllocationInfo().
   1380 
   1381 There is also an extended version of this structure that carries additional parameters: #VmaAllocationInfo2.
   1382 */
   1383 typedef struct VmaAllocationInfo
   1384 {
   1385     /** \brief Memory type index that this allocation was allocated from.
   1386 
   1387     It never changes.
   1388     */
   1389     uint32_t memoryType;
   1390     /** \brief Handle to Vulkan memory object.
   1391 
   1392     Same memory object can be shared by multiple allocations.
   1393 
   1394     It can change after the allocation is moved during \ref defragmentation.
   1395     */
   1396     VkDeviceMemory VMA_NULLABLE_NON_DISPATCHABLE deviceMemory;
   1397     /** \brief Offset in `VkDeviceMemory` object to the beginning of this allocation, in bytes. `(deviceMemory, offset)` pair is unique to this allocation.
   1398 
   1399     You usually don't need to use this offset. If you create a buffer or an image together with the allocation using e.g.
   1400     vmaCreateBuffer() or vmaCreateImage(), functions that operate on these resources refer to the beginning of the buffer or image,
   1401     not the entire device memory block. Functions like vmaMapMemory() and vmaBindBufferMemory() also refer to the beginning of the allocation
   1402     and apply this offset automatically.
   1403 
   1404     It can change after the allocation is moved during \ref defragmentation.
   1405     */
   1406     VkDeviceSize offset;
   1407     /** \brief Size of this allocation, in bytes.
   1408 
   1409     It never changes.
   1410 
   1411     \note The allocation size returned in this variable may be greater than the size
   1412     requested for the resource, e.g. as `VkBufferCreateInfo::size`. The whole size of the
   1413     allocation is accessible for operations on memory, e.g. using a pointer after
   1414     mapping with vmaMapMemory(), but operations on the resource, e.g. using
   1415     `vkCmdCopyBuffer`, must be limited to the size of the resource.
   1416     */
   1417     VkDeviceSize size;
   1418     /** \brief Pointer to the beginning of this allocation as mapped data.
   1419 
   1420     If the allocation hasn't been mapped using vmaMapMemory() and hasn't been
   1421     created with #VMA_ALLOCATION_CREATE_MAPPED_BIT flag, this value is null.
   1422 
   1423     It can change after a call to vmaMapMemory() or vmaUnmapMemory().
   1424     It can also change after the allocation is moved during \ref defragmentation.
   1425     */
   1426     void* VMA_NULLABLE pMappedData;
   1427     /** \brief Custom general-purpose pointer that was passed as VmaAllocationCreateInfo::pUserData or set using vmaSetAllocationUserData().
   1428 
   1429     It can change after a call to vmaSetAllocationUserData() for this allocation.
   1430     */
   1431     void* VMA_NULLABLE pUserData;
   1432     /** \brief Custom allocation name that was set with vmaSetAllocationName().
   1433 
   1434     It can change after a call to vmaSetAllocationName() for this allocation.
   1435 
   1436     Another way to set a custom name is to pass it in VmaAllocationCreateInfo::pUserData with the
   1437     additional flag #VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT set [DEPRECATED].
   1438     */
   1439     const char* VMA_NULLABLE pName;
   1440 } VmaAllocationInfo;
   1441 
   1442 /// Extended parameters of a #VmaAllocation object that can be retrieved using function vmaGetAllocationInfo2().
   1443 typedef struct VmaAllocationInfo2
   1444 {
   1445     /** \brief Basic parameters of the allocation.
   1446     
   1447     If you need only these, you can use function vmaGetAllocationInfo() and structure #VmaAllocationInfo instead.
   1448     */
   1449     VmaAllocationInfo allocationInfo;
   1450     /** \brief Size of the `VkDeviceMemory` block that the allocation belongs to.
   1451     
   1452     In case of an allocation with dedicated memory, it will be equal to `allocationInfo.size`.
   1453     */
   1454     VkDeviceSize blockSize;
   1455     /** \brief `VK_TRUE` if the allocation has dedicated memory, `VK_FALSE` if it was placed as part of a larger memory block.
   1456     
   1457     When `VK_TRUE`, it also means `VkMemoryDedicatedAllocateInfo` was used when creating the allocation
   1458     (if VK_KHR_dedicated_allocation extension or Vulkan version >= 1.1 is enabled).
   1459     */
   1460     VkBool32 dedicatedMemory;
   1461 } VmaAllocationInfo2;
   1462 
   1463 /** Callback function called during vmaBeginDefragmentation() to check a custom criterion for ending the current defragmentation pass.
   1464 
   1465 Should return true if the defragmentation needs to stop the current pass.
   1466 */
   1467 typedef VkBool32 (VKAPI_PTR* PFN_vmaCheckDefragmentationBreakFunction)(void* VMA_NULLABLE pUserData);
   1468 
   1469 /** \brief Parameters for defragmentation.
   1470 
   1471 To be used with function vmaBeginDefragmentation().
   1472 */
   1473 typedef struct VmaDefragmentationInfo
   1474 {
   1475     /// \brief Use combination of #VmaDefragmentationFlagBits.
   1476     VmaDefragmentationFlags flags;
   1477     /** \brief Custom pool to be defragmented.
   1478 
   1479     If null, default pools will undergo the defragmentation process.
   1480     */
   1481     VmaPool VMA_NULLABLE pool;
   1482     /** \brief Maximum number of bytes that can be copied during a single pass while moving allocations to different places.
   1483 
   1484     `0` means no limit.
   1485     */
   1486     VkDeviceSize maxBytesPerPass;
   1487     /** \brief Maximum number of allocations that can be moved during a single pass to a different place.
   1488 
   1489     `0` means no limit.
   1490     */
   1491     uint32_t maxAllocationsPerPass;
   1492     /** \brief Optional custom callback for stopping the defragmentation process.
   1493 
   1494     It has to return true to break the current defragmentation pass.
   1495     */
   1496     PFN_vmaCheckDefragmentationBreakFunction VMA_NULLABLE pfnBreakCallback;
   1497     /// \brief Optional data to pass to the custom callback for stopping a pass of defragmentation.
   1498     void* VMA_NULLABLE pBreakCallbackUserData;
   1499 } VmaDefragmentationInfo;
   1500 
   1501 /// Single move of an allocation to be done for defragmentation.
   1502 typedef struct VmaDefragmentationMove
   1503 {
   1504     /// Operation to be performed on the allocation by vmaEndDefragmentationPass(). Default value is #VMA_DEFRAGMENTATION_MOVE_OPERATION_COPY. You can modify it.
   1505     VmaDefragmentationMoveOperation operation;
   1506     /// Allocation that should be moved.
   1507     VmaAllocation VMA_NOT_NULL srcAllocation;
   1508     /** \brief Temporary allocation pointing to destination memory that will replace `srcAllocation`.
   1509 
   1510     \warning Do not store this allocation in your data structures! It exists only temporarily, for the duration of the defragmentation pass,
   1511     to be used for binding new buffer/image to the destination memory using e.g. vmaBindBufferMemory().
   1512     vmaEndDefragmentationPass() will destroy it and make `srcAllocation` point to this memory.
   1513     */
   1514     VmaAllocation VMA_NOT_NULL dstTmpAllocation;
   1515 } VmaDefragmentationMove;
   1516 
   1517 /** \brief Parameters for incremental defragmentation steps.
   1518 
   1519 To be used with function vmaBeginDefragmentationPass().
   1520 */
   1521 typedef struct VmaDefragmentationPassMoveInfo
   1522 {
   1523     /// Number of elements in the `pMoves` array.
   1524     uint32_t moveCount;
   1525     /** \brief Array of moves to be performed by the user in the current defragmentation pass.
   1526 
   1527     Pointer to an array of `moveCount` elements, owned by VMA, created in vmaBeginDefragmentationPass(), destroyed in vmaEndDefragmentationPass().
   1528 
   1529     For each element, you should:
   1530 
   1531     1. Create a new buffer/image in the place pointed to by VmaDefragmentationMove::dstTmpAllocation and bind it there using e.g. vmaBindBufferMemory().
   1532     2. Copy data from the VmaDefragmentationMove::srcAllocation e.g. using `vkCmdCopyBuffer`, `vkCmdCopyImage`.
   1533     3. Make sure these commands finished executing on the GPU.
   1534     4. Destroy the old buffer/image.
   1535 
   1536     Only then can you finish the defragmentation pass by calling vmaEndDefragmentationPass().
   1537     After this call, the allocation will point to the new place in memory.
   1538 
   1539     Alternatively, if you cannot move a specific allocation, you can set VmaDefragmentationMove::operation to #VMA_DEFRAGMENTATION_MOVE_OPERATION_IGNORE.
   1540 
   1541     Alternatively, if you decide you want to completely remove the allocation:
   1542 
   1543     1. Destroy its buffer/image.
   1544     2. Set VmaDefragmentationMove::operation to #VMA_DEFRAGMENTATION_MOVE_OPERATION_DESTROY.
   1545 
   1546     Then, after vmaEndDefragmentationPass() the allocation will be freed.
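
    A simplified sketch of the surrounding loop, under the assumption that error handling
    and GPU synchronization are handled elsewhere:

    \code
    VmaDefragmentationInfo defragInfo = {};
    VmaDefragmentationContext defragCtx;
    vmaBeginDefragmentation(allocator, &defragInfo, &defragCtx);

    for(;;)
    {
        VmaDefragmentationPassMoveInfo pass;
        if(vmaBeginDefragmentationPass(allocator, defragCtx, &pass) == VK_SUCCESS)
            break; // Nothing left to move.
        // Process pass.pMoves[0 .. pass.moveCount) as described above...
        if(vmaEndDefragmentationPass(allocator, defragCtx, &pass) == VK_SUCCESS)
            break; // Defragmentation finished.
    }

    vmaEndDefragmentation(allocator, defragCtx, NULL);
    \endcode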
   1547     */
   1548     VmaDefragmentationMove* VMA_NULLABLE VMA_LEN_IF_NOT_NULL(moveCount) pMoves;
   1549 } VmaDefragmentationPassMoveInfo;
   1550 
   1551 /// Statistics returned for defragmentation process in function vmaEndDefragmentation().
   1552 typedef struct VmaDefragmentationStats
   1553 {
   1554     /// Total number of bytes that have been copied while moving allocations to different places.
   1555     VkDeviceSize bytesMoved;
   1556     /// Total number of bytes that have been released to the system by freeing empty `VkDeviceMemory` objects.
   1557     VkDeviceSize bytesFreed;
   1558     /// Number of allocations that have been moved to different places.
   1559     uint32_t allocationsMoved;
   1560     /// Number of empty `VkDeviceMemory` objects that have been released to the system.
   1561     uint32_t deviceMemoryBlocksFreed;
   1562 } VmaDefragmentationStats;
   1563 
   1564 /** @} */
   1565 
   1566 /**
   1567 \addtogroup group_virtual
   1568 @{
   1569 */
   1570 
   1571 /// Parameters of a #VmaVirtualBlock object to be created, to be passed to vmaCreateVirtualBlock().
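///
/// A hedged creation sketch (vmaCreateVirtualBlock() is declared further below in this header):
/// \code
/// VmaVirtualBlockCreateInfo blockCreateInfo = {};
/// blockCreateInfo.size = 1048576; // 1 MiB of "virtual" space
///
/// VmaVirtualBlock block;
/// VkResult res = vmaCreateVirtualBlock(&blockCreateInfo, &block);
/// \endcode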
   1572 typedef struct VmaVirtualBlockCreateInfo
   1573 {
   1574     /** \brief Total size of the virtual block.
   1575 
   1576     Sizes can be expressed in bytes or any units you want as long as you are consistent in using them.
   1577     For example, if you allocate from some array of structures, 1 can mean a single instance of the entire structure.
   1578     */
   1579     VkDeviceSize size;
   1580 
   1581     /** \brief Use combination of #VmaVirtualBlockCreateFlagBits.
   1582     */
   1583     VmaVirtualBlockCreateFlags flags;
   1584 
   1585     /** \brief Custom CPU memory allocation callbacks. Optional.
   1586 
   1587     Optional, can be null. When specified, they will be used for all CPU-side memory allocations.
   1588     */
   1589     const VkAllocationCallbacks* VMA_NULLABLE pAllocationCallbacks;
   1590 } VmaVirtualBlockCreateInfo;
   1591 
   1592 /// Parameters of a virtual allocation to be created, to be passed to vmaVirtualAllocate().
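///
/// A hedged usage sketch (`block` is an existing #VmaVirtualBlock; vmaVirtualAllocate() is declared further below):
/// \code
/// VmaVirtualAllocationCreateInfo allocCreateInfo = {};
/// allocCreateInfo.size = 4096;
///
/// VmaVirtualAllocation alloc;
/// VkDeviceSize offset;
/// VkResult res = vmaVirtualAllocate(block, &allocCreateInfo, &alloc, &offset);
/// \endcode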
   1593 typedef struct VmaVirtualAllocationCreateInfo
   1594 {
   1595     /** \brief Size of the allocation.
   1596 
   1597     Cannot be zero.
   1598     */
   1599     VkDeviceSize size;
   1600     /** \brief Required alignment of the allocation. Optional.
   1601 
   1602     Must be a power of two. The special value 0 has the same meaning as 1 - no special alignment is required, so the allocation can start at any offset.
   1603     */
   1604     VkDeviceSize alignment;
   1605     /** \brief Use combination of #VmaVirtualAllocationCreateFlagBits.
   1606     */
   1607     VmaVirtualAllocationCreateFlags flags;
   1608     /** \brief Custom pointer to be associated with the allocation. Optional.
   1609 
   1610     It can be any value and can be used for user-defined purposes. It can be fetched or changed later.
   1611     */
   1612     void* VMA_NULLABLE pUserData;
   1613 } VmaVirtualAllocationCreateInfo;
   1614 
   1615 /// Parameters of an existing virtual allocation, returned by vmaGetVirtualAllocationInfo().
   1616 typedef struct VmaVirtualAllocationInfo
   1617 {
   1618     /** \brief Offset of the allocation.
   1619 
   1620     Offset at which the allocation was made.
   1621     */
   1622     VkDeviceSize offset;
   1623     /** \brief Size of the allocation.
   1624 
   1625     Same value as passed in VmaVirtualAllocationCreateInfo::size.
   1626     */
   1627     VkDeviceSize size;
   1628     /** \brief Custom pointer associated with the allocation.
   1629 
   1630     Same value as passed in VmaVirtualAllocationCreateInfo::pUserData or to vmaSetVirtualAllocationUserData().
   1631     */
   1632     void* VMA_NULLABLE pUserData;
   1633 } VmaVirtualAllocationInfo;
   1634 
   1635 /** @} */
   1636 
   1637 #endif // _VMA_DATA_TYPES_DECLARATIONS
   1638 
   1639 #ifndef _VMA_FUNCTION_HEADERS
   1640 
   1641 /**
   1642 \addtogroup group_init
   1643 @{
   1644 */
   1645 
   1646 /// Creates #VmaAllocator object.
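///
/// A minimal creation sketch, assuming the Vulkan instance and device were created for Vulkan 1.1
/// (the handles are illustrative):
/// \code
/// VmaAllocatorCreateInfo allocatorCreateInfo = {};
/// allocatorCreateInfo.vulkanApiVersion = VK_API_VERSION_1_1;
/// allocatorCreateInfo.physicalDevice = physicalDevice;
/// allocatorCreateInfo.device = device;
/// allocatorCreateInfo.instance = instance;
///
/// VmaAllocator allocator;
/// VkResult res = vmaCreateAllocator(&allocatorCreateInfo, &allocator);
/// \endcode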
   1647 VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateAllocator(
   1648     const VmaAllocatorCreateInfo* VMA_NOT_NULL pCreateInfo,
   1649     VmaAllocator VMA_NULLABLE* VMA_NOT_NULL pAllocator);
   1650 
   1651 /// Destroys allocator object.
   1652 VMA_CALL_PRE void VMA_CALL_POST vmaDestroyAllocator(
   1653     VmaAllocator VMA_NULLABLE allocator);
   1654 
   1655 /** \brief Returns information about existing #VmaAllocator object - handle to Vulkan device etc.
   1656 
   1657 It might be useful if you want to keep just the #VmaAllocator handle and fetch the other required handles,
   1658 `VkPhysicalDevice`, `VkDevice` etc., every time using this function.
   1659 */
   1660 VMA_CALL_PRE void VMA_CALL_POST vmaGetAllocatorInfo(
   1661     VmaAllocator VMA_NOT_NULL allocator,
   1662     VmaAllocatorInfo* VMA_NOT_NULL pAllocatorInfo);
   1663 
   1664 /**
   1665 PhysicalDeviceProperties are fetched from physicalDevice by the allocator.
   1666 You can access them here, without fetching them again on your own.
   1667 */
   1668 VMA_CALL_PRE void VMA_CALL_POST vmaGetPhysicalDeviceProperties(
   1669     VmaAllocator VMA_NOT_NULL allocator,
   1670     const VkPhysicalDeviceProperties* VMA_NULLABLE* VMA_NOT_NULL ppPhysicalDeviceProperties);
   1671 
   1672 /**
   1673 PhysicalDeviceMemoryProperties are fetched from physicalDevice by the allocator.
   1674 You can access them here, without fetching them again on your own.
   1675 */
   1676 VMA_CALL_PRE void VMA_CALL_POST vmaGetMemoryProperties(
   1677     VmaAllocator VMA_NOT_NULL allocator,
   1678     const VkPhysicalDeviceMemoryProperties* VMA_NULLABLE* VMA_NOT_NULL ppPhysicalDeviceMemoryProperties);
   1679 
   1680 /**
   1681 \brief Given Memory Type Index, returns Property Flags of this memory type.
   1682 
   1683 This is just a convenience function. The same information can be obtained using
   1684 vmaGetMemoryProperties().
   1685 */
   1686 VMA_CALL_PRE void VMA_CALL_POST vmaGetMemoryTypeProperties(
   1687     VmaAllocator VMA_NOT_NULL allocator,
   1688     uint32_t memoryTypeIndex,
   1689     VkMemoryPropertyFlags* VMA_NOT_NULL pFlags);
   1690 
   1691 /** \brief Sets the index of the current frame.
   1692 */
   1693 VMA_CALL_PRE void VMA_CALL_POST vmaSetCurrentFrameIndex(
   1694     VmaAllocator VMA_NOT_NULL allocator,
   1695     uint32_t frameIndex);
   1696 
   1697 /** @} */
   1698 
   1699 /**
   1700 \addtogroup group_stats
   1701 @{
   1702 */
   1703 
   1704 /** \brief Retrieves statistics from the current state of the Allocator.
   1705 
   1706 This function is called "calculate" not "get" because it has to traverse all
   1707 internal data structures, so it may be quite slow. Use it for debugging purposes.
   1708 For faster but more brief statistics suitable to be called every frame or every allocation,
   1709 use vmaGetHeapBudgets().
   1710 
   1711 Note that when using the allocator from multiple threads, the returned information may immediately
   1712 become outdated.
   1713 */
   1714 VMA_CALL_PRE void VMA_CALL_POST vmaCalculateStatistics(
   1715     VmaAllocator VMA_NOT_NULL allocator,
   1716     VmaTotalStatistics* VMA_NOT_NULL pStats);
   1717 
   1718 /** \brief Retrieves information about current memory usage and budget for all memory heaps.
   1719 
   1720 \param allocator
   1721 \param[out] pBudgets Must point to an array with a number of elements at least equal to the number of memory heaps in the physical device used.
   1722 
   1723 This function is called "get" not "calculate" because it is very fast, suitable to be called
   1724 every frame or every allocation. For more detailed statistics use vmaCalculateStatistics().
   1725 
   1726 Note that when using the allocator from multiple threads, the returned information may immediately
   1727 become outdated.
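
A hedged usage sketch (the heap count comes from vmaGetMemoryProperties()):

\code
const VkPhysicalDeviceMemoryProperties* memProps;
vmaGetMemoryProperties(allocator, &memProps);

VmaBudget budgets[VK_MAX_MEMORY_HEAPS];
vmaGetHeapBudgets(allocator, budgets);
for(uint32_t heapIndex = 0; heapIndex < memProps->memoryHeapCount; ++heapIndex)
{
    // Compare budgets[heapIndex].usage against budgets[heapIndex].budget
    // to see how close this heap is to its limit.
}
\endcode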
   1728 */
   1729 VMA_CALL_PRE void VMA_CALL_POST vmaGetHeapBudgets(
   1730     VmaAllocator VMA_NOT_NULL allocator,
   1731     VmaBudget* VMA_NOT_NULL VMA_LEN_IF_NOT_NULL("VkPhysicalDeviceMemoryProperties::memoryHeapCount") pBudgets);
   1732 
   1733 /** @} */
   1734 
   1735 /**
   1736 \addtogroup group_alloc
   1737 @{
   1738 */
   1739 
   1740 /**
   1741 \brief Helps to find memoryTypeIndex, given memoryTypeBits and VmaAllocationCreateInfo.
   1742 
   1743 This algorithm tries to find a memory type that:
   1744 
   1745 - Is allowed by memoryTypeBits.
   1746 - Contains all the flags from pAllocationCreateInfo->requiredFlags.
   1747 - Matches intended usage.
   1748 - Has as many flags from pAllocationCreateInfo->preferredFlags as possible.
   1749 
   1750 \return Returns VK_ERROR_FEATURE_NOT_PRESENT if not found. Receiving such a result
   1751 from this function or any other allocating function probably means that your
   1752 device doesn't support any memory type with the requested features for the specific
   1753 type of resource you want to use it for. Please check the parameters of your
   1754 resource, like the image layout (OPTIMAL versus LINEAR) or the mip level count.
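
A hedged sketch of finding a `HOST_VISIBLE` + `HOST_COHERENT` memory type (the flags are illustrative):

\code
VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.requiredFlags = VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT | VK_MEMORY_PROPERTY_HOST_COHERENT_BIT;

uint32_t memTypeIndex;
VkResult res = vmaFindMemoryTypeIndex(allocator, UINT32_MAX, &allocCreateInfo, &memTypeIndex);
\endcode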
   1755 */
   1756 VMA_CALL_PRE VkResult VMA_CALL_POST vmaFindMemoryTypeIndex(
   1757     VmaAllocator VMA_NOT_NULL allocator,
   1758     uint32_t memoryTypeBits,
   1759     const VmaAllocationCreateInfo* VMA_NOT_NULL pAllocationCreateInfo,
   1760     uint32_t* VMA_NOT_NULL pMemoryTypeIndex);
   1761 
   1762 /**
   1763 \brief Helps to find memoryTypeIndex, given VkBufferCreateInfo and VmaAllocationCreateInfo.
   1764 
   1765 It can be useful e.g. to determine the value to be used as VmaPoolCreateInfo::memoryTypeIndex.
   1766 It internally creates a temporary, dummy buffer that never has memory bound.
   1767 */
   1768 VMA_CALL_PRE VkResult VMA_CALL_POST vmaFindMemoryTypeIndexForBufferInfo(
   1769     VmaAllocator VMA_NOT_NULL allocator,
   1770     const VkBufferCreateInfo* VMA_NOT_NULL pBufferCreateInfo,
   1771     const VmaAllocationCreateInfo* VMA_NOT_NULL pAllocationCreateInfo,
   1772     uint32_t* VMA_NOT_NULL pMemoryTypeIndex);
   1773 
   1774 /**
   1775 \brief Helps to find memoryTypeIndex, given VkImageCreateInfo and VmaAllocationCreateInfo.
   1776 
   1777 It can be useful e.g. to determine the value to be used as VmaPoolCreateInfo::memoryTypeIndex.
   1778 It internally creates a temporary, dummy image that never has memory bound.
   1779 */
   1780 VMA_CALL_PRE VkResult VMA_CALL_POST vmaFindMemoryTypeIndexForImageInfo(
   1781     VmaAllocator VMA_NOT_NULL allocator,
   1782     const VkImageCreateInfo* VMA_NOT_NULL pImageCreateInfo,
   1783     const VmaAllocationCreateInfo* VMA_NOT_NULL pAllocationCreateInfo,
   1784     uint32_t* VMA_NOT_NULL pMemoryTypeIndex);
   1785 
   1786 /** \brief Allocates Vulkan device memory and creates #VmaPool object.
   1787 
   1788 \param allocator Allocator object.
   1789 \param pCreateInfo Parameters of pool to create.
   1790 \param[out] pPool Handle to created pool.
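
A hedged creation sketch (`memTypeIndex` would typically come from vmaFindMemoryTypeIndexForBufferInfo();
the sizes are illustrative):

\code
VmaPoolCreateInfo poolCreateInfo = {};
poolCreateInfo.memoryTypeIndex = memTypeIndex;
poolCreateInfo.blockSize = 16ull * 1024 * 1024; // fixed 16 MiB blocks
poolCreateInfo.maxBlockCount = 4;               // at most 4 blocks = 64 MiB

VmaPool pool;
VkResult res = vmaCreatePool(allocator, &poolCreateInfo, &pool);
\endcode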
   1791 */
   1792 VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreatePool(
   1793     VmaAllocator VMA_NOT_NULL allocator,
   1794     const VmaPoolCreateInfo* VMA_NOT_NULL pCreateInfo,
   1795     VmaPool VMA_NULLABLE* VMA_NOT_NULL pPool);
   1796 
   1797 /** \brief Destroys #VmaPool object and frees Vulkan device memory.
   1798 */
   1799 VMA_CALL_PRE void VMA_CALL_POST vmaDestroyPool(
   1800     VmaAllocator VMA_NOT_NULL allocator,
   1801     VmaPool VMA_NULLABLE pool);
   1802 
   1803 /** @} */
   1804 
   1805 /**
   1806 \addtogroup group_stats
   1807 @{
   1808 */
   1809 
   1810 /** \brief Retrieves statistics of existing #VmaPool object.
   1811 
   1812 \param allocator Allocator object.
   1813 \param pool Pool object.
   1814 \param[out] pPoolStats Statistics of specified pool.
   1815 */
   1816 VMA_CALL_PRE void VMA_CALL_POST vmaGetPoolStatistics(
   1817     VmaAllocator VMA_NOT_NULL allocator,
   1818     VmaPool VMA_NOT_NULL pool,
   1819     VmaStatistics* VMA_NOT_NULL pPoolStats);
   1820 
   1821 /** \brief Retrieves detailed statistics of existing #VmaPool object.
   1822 
   1823 \param allocator Allocator object.
   1824 \param pool Pool object.
   1825 \param[out] pPoolStats Statistics of specified pool.
   1826 */
   1827 VMA_CALL_PRE void VMA_CALL_POST vmaCalculatePoolStatistics(
   1828     VmaAllocator VMA_NOT_NULL allocator,
   1829     VmaPool VMA_NOT_NULL pool,
   1830     VmaDetailedStatistics* VMA_NOT_NULL pPoolStats);
   1831 
   1832 /** @} */
   1833 
   1834 /**
   1835 \addtogroup group_alloc
   1836 @{
   1837 */
   1838 
   1839 /** \brief Checks the magic number in margins around all allocations in the given memory pool in search of corruptions.
   1840 
   1841 Corruption detection is enabled only when the `VMA_DEBUG_DETECT_CORRUPTION` macro is defined to nonzero,
   1842 `VMA_DEBUG_MARGIN` is defined to nonzero, and the pool is created in a memory type that is
   1843 `HOST_VISIBLE` and `HOST_COHERENT`. For more information, see [Corruption detection](@ref debugging_memory_usage_corruption_detection).
   1844 
   1845 Possible return values:
   1846 
   1847 - `VK_ERROR_FEATURE_NOT_PRESENT` - corruption detection is not enabled for specified pool.
   1848 - `VK_SUCCESS` - corruption detection has been performed and succeeded.
   1849 - `VK_ERROR_UNKNOWN` - corruption detection has been performed and found memory corruptions around one of the allocations.
   1850   `VMA_ASSERT` is also fired in that case.
   1851 - Other value: Error returned by Vulkan, e.g. memory mapping failure.
   1852 */
   1853 VMA_CALL_PRE VkResult VMA_CALL_POST vmaCheckPoolCorruption(
   1854     VmaAllocator VMA_NOT_NULL allocator,
   1855     VmaPool VMA_NOT_NULL pool);
   1856 
   1857 /** \brief Retrieves name of a custom pool.
   1858 
   1859 After the call `ppName` is either null or points to an internally-owned null-terminated string
   1860 containing the name of the pool that was previously set. The pointer becomes invalid when the pool is
   1861 destroyed or its name is changed using vmaSetPoolName().
   1862 */
   1863 VMA_CALL_PRE void VMA_CALL_POST vmaGetPoolName(
   1864     VmaAllocator VMA_NOT_NULL allocator,
   1865     VmaPool VMA_NOT_NULL pool,
   1866     const char* VMA_NULLABLE* VMA_NOT_NULL ppName);
   1867 
   1868 /** \brief Sets name of a custom pool.
   1869 
   1870 `pName` can be either null or a pointer to a null-terminated string with a new name for the pool.
   1871 The function makes an internal copy of the string, so it can be changed or freed immediately after this call.
   1872 */
   1873 VMA_CALL_PRE void VMA_CALL_POST vmaSetPoolName(
   1874     VmaAllocator VMA_NOT_NULL allocator,
   1875     VmaPool VMA_NOT_NULL pool,
   1876     const char* VMA_NULLABLE pName);
   1877 
   1878 /** \brief General purpose memory allocation.
   1879 
   1880 \param allocator
   1881 \param pVkMemoryRequirements
   1882 \param pCreateInfo
   1883 \param[out] pAllocation Handle to allocated memory.
   1884 \param[out] pAllocationInfo Optional. Information about allocated memory. It can be later fetched using function vmaGetAllocationInfo().
   1885 
   1886 You should free the memory using vmaFreeMemory() or vmaFreeMemoryPages().
   1887 
   1888 It is recommended to use vmaAllocateMemoryForBuffer(), vmaAllocateMemoryForImage(),
   1889 vmaCreateBuffer(), vmaCreateImage() instead whenever possible.
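
A hedged sketch (`device` and `buf` are illustrative; required flags are used instead of
#VMA_MEMORY_USAGE_AUTO because the AUTO values are intended for functions that know the resource):

\code
VkMemoryRequirements memReq;
vkGetBufferMemoryRequirements(device, buf, &memReq);

VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.requiredFlags = VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;

VmaAllocation alloc;
VkResult res = vmaAllocateMemory(allocator, &memReq, &allocCreateInfo, &alloc, NULL);
\endcode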
   1890 */
   1891 VMA_CALL_PRE VkResult VMA_CALL_POST vmaAllocateMemory(
   1892     VmaAllocator VMA_NOT_NULL allocator,
   1893     const VkMemoryRequirements* VMA_NOT_NULL pVkMemoryRequirements,
   1894     const VmaAllocationCreateInfo* VMA_NOT_NULL pCreateInfo,
   1895     VmaAllocation VMA_NULLABLE* VMA_NOT_NULL pAllocation,
   1896     VmaAllocationInfo* VMA_NULLABLE pAllocationInfo);
   1897 
   1898 /** \brief General purpose memory allocation for multiple allocation objects at once.
   1899 
   1900 \param allocator Allocator object.
   1901 \param pVkMemoryRequirements Memory requirements for each allocation.
   1902 \param pCreateInfo Creation parameters for each allocation.
   1903 \param allocationCount Number of allocations to make.
   1904 \param[out] pAllocations Pointer to array that will be filled with handles to created allocations.
   1905 \param[out] pAllocationInfo Optional. Pointer to array that will be filled with parameters of created allocations.
   1906 
   1907 You should free the memory using vmaFreeMemory() or vmaFreeMemoryPages().
   1908 
   1909 Word "pages" is just a suggestion to use this function to allocate pieces of memory needed for sparse binding.
   1910 It is just a general purpose allocation function able to make multiple allocations at once.
   1911 It may be internally optimized to be more efficient than calling vmaAllocateMemory() `allocationCount` times.
   1912 
   1913 All allocations are made using the same parameters. All of them are created out of the same memory pool and type.
   1914 If any allocation fails, all allocations already made within this function call are also freed, so that when
   1915 the returned result is not `VK_SUCCESS`, the `pAllocations` array is always entirely filled with `VK_NULL_HANDLE`.
   1916 */
   1917 VMA_CALL_PRE VkResult VMA_CALL_POST vmaAllocateMemoryPages(
   1918     VmaAllocator VMA_NOT_NULL allocator,
   1919     const VkMemoryRequirements* VMA_NOT_NULL VMA_LEN_IF_NOT_NULL(allocationCount) pVkMemoryRequirements,
   1920     const VmaAllocationCreateInfo* VMA_NOT_NULL VMA_LEN_IF_NOT_NULL(allocationCount) pCreateInfo,
   1921     size_t allocationCount,
   1922     VmaAllocation VMA_NULLABLE* VMA_NOT_NULL VMA_LEN_IF_NOT_NULL(allocationCount) pAllocations,
   1923     VmaAllocationInfo* VMA_NULLABLE VMA_LEN_IF_NOT_NULL(allocationCount) pAllocationInfo);
   1924 
   1925 /** \brief Allocates memory suitable for given `VkBuffer`.
   1926 
   1927 \param allocator
   1928 \param buffer
   1929 \param pCreateInfo
   1930 \param[out] pAllocation Handle to allocated memory.
   1931 \param[out] pAllocationInfo Optional. Information about allocated memory. It can be later fetched using function vmaGetAllocationInfo().
   1932 
   1933 It only creates #VmaAllocation. To bind the memory to the buffer, use vmaBindBufferMemory().
   1934 
   1935 This is a special-purpose function. In most cases you should use vmaCreateBuffer().
   1936 
   1937 You must free the allocation using vmaFreeMemory() when no longer needed.
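
A hedged sketch (`buf` was created beforehand with vkCreateBuffer(); the required flags are illustrative):

\code
VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.requiredFlags = VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;

VmaAllocation alloc;
VkResult res = vmaAllocateMemoryForBuffer(allocator, buf, &allocCreateInfo, &alloc, NULL);
if(res == VK_SUCCESS)
    res = vmaBindBufferMemory(allocator, alloc, buf);
\endcode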
   1938 */
   1939 VMA_CALL_PRE VkResult VMA_CALL_POST vmaAllocateMemoryForBuffer(
   1940     VmaAllocator VMA_NOT_NULL allocator,
   1941     VkBuffer VMA_NOT_NULL_NON_DISPATCHABLE buffer,
   1942     const VmaAllocationCreateInfo* VMA_NOT_NULL pCreateInfo,
   1943     VmaAllocation VMA_NULLABLE* VMA_NOT_NULL pAllocation,
   1944     VmaAllocationInfo* VMA_NULLABLE pAllocationInfo);
   1945 
   1946 /** \brief Allocates memory suitable for given `VkImage`.
   1947 
   1948 \param allocator
   1949 \param image
   1950 \param pCreateInfo
   1951 \param[out] pAllocation Handle to allocated memory.
   1952 \param[out] pAllocationInfo Optional. Information about allocated memory. It can be later fetched using function vmaGetAllocationInfo().
   1953 
   1954 It only creates #VmaAllocation. To bind the memory to the image, use vmaBindImageMemory().
   1955 
   1956 This is a special-purpose function. In most cases you should use vmaCreateImage().
   1957 
   1958 You must free the allocation using vmaFreeMemory() when no longer needed.
   1959 */
   1960 VMA_CALL_PRE VkResult VMA_CALL_POST vmaAllocateMemoryForImage(
   1961     VmaAllocator VMA_NOT_NULL allocator,
   1962     VkImage VMA_NOT_NULL_NON_DISPATCHABLE image,
   1963     const VmaAllocationCreateInfo* VMA_NOT_NULL pCreateInfo,
   1964     VmaAllocation VMA_NULLABLE* VMA_NOT_NULL pAllocation,
   1965     VmaAllocationInfo* VMA_NULLABLE pAllocationInfo);
   1966 
   1967 /** \brief Frees memory previously allocated using vmaAllocateMemory(), vmaAllocateMemoryForBuffer(), or vmaAllocateMemoryForImage().
   1968 
   1969 Passing `VK_NULL_HANDLE` as `allocation` is valid. Such a function call is just skipped.
   1970 */
   1971 VMA_CALL_PRE void VMA_CALL_POST vmaFreeMemory(
   1972     VmaAllocator VMA_NOT_NULL allocator,
   1973     const VmaAllocation VMA_NULLABLE allocation);
   1974 
   1975 /** \brief Frees memory and destroys multiple allocations.
   1976 
   1977 Word "pages" is just a suggestion to use this function to free pieces of memory used for sparse binding.
   1978 It is just a general purpose function to free memory and destroy allocations made using e.g. vmaAllocateMemory(),
   1979 vmaAllocateMemoryPages() and other functions.
   1980 It may be internally optimized to be more efficient than calling vmaFreeMemory() `allocationCount` times.
   1981 
   1982 Allocations in `pAllocations` array can come from any memory pools and types.
   1983 Passing `VK_NULL_HANDLE` as elements of the `pAllocations` array is valid. Such entries are just skipped.
   1984 */
   1985 VMA_CALL_PRE void VMA_CALL_POST vmaFreeMemoryPages(
   1986     VmaAllocator VMA_NOT_NULL allocator,
   1987     size_t allocationCount,
   1988     const VmaAllocation VMA_NULLABLE* VMA_NOT_NULL VMA_LEN_IF_NOT_NULL(allocationCount) pAllocations);
   1989 
   1990 /** \brief Returns current information about specified allocation.
   1991 
   1992 Current parameters of the given allocation are returned in `pAllocationInfo`.
   1993 
   1994 This function doesn't lock any mutex, so it should be quite efficient;
   1995 still, you should avoid calling it too often.
   1996 You can retrieve the same VmaAllocationInfo structure while creating your resource, from functions
   1997 vmaCreateBuffer() and vmaCreateImage(). You can remember it if you are sure its parameters won't change
   1998 (e.g. due to defragmentation).
   1999 
   2000 There is also a new function vmaGetAllocationInfo2() that offers extended information
   2001 about the allocation, returned using the new structure #VmaAllocationInfo2.
   2002 */
   2003 VMA_CALL_PRE void VMA_CALL_POST vmaGetAllocationInfo(
   2004     VmaAllocator VMA_NOT_NULL allocator,
   2005     VmaAllocation VMA_NOT_NULL allocation,
   2006     VmaAllocationInfo* VMA_NOT_NULL pAllocationInfo);
   2007 
   2008 /** \brief Returns extended information about specified allocation.
   2009 
   2010 Current parameters of the given allocation are returned in `pAllocationInfo`.
   2011 Extended parameters in structure #VmaAllocationInfo2 include memory block size
   2012 and a flag telling whether the allocation has dedicated memory.
   2013 It can be useful e.g. for interop with OpenGL.
   2014 */
   2015 VMA_CALL_PRE void VMA_CALL_POST vmaGetAllocationInfo2(
   2016     VmaAllocator VMA_NOT_NULL allocator,
   2017     VmaAllocation VMA_NOT_NULL allocation,
   2018     VmaAllocationInfo2* VMA_NOT_NULL pAllocationInfo);
   2019 
   2020 /** \brief Sets pUserData in the given allocation to a new value.
   2021 
   2022 The value of pointer `pUserData` is copied to the allocation's `pUserData`.
   2023 It is opaque, so you can use it however you want - e.g.
   2024 as a pointer, an ordinal number, or some handle to your own data.
   2025 */
   2026 VMA_CALL_PRE void VMA_CALL_POST vmaSetAllocationUserData(
   2027     VmaAllocator VMA_NOT_NULL allocator,
   2028     VmaAllocation VMA_NOT_NULL allocation,
   2029     void* VMA_NULLABLE pUserData);
   2030 
   2031 /** \brief Sets pName in the given allocation to a new value.
   2032 
   2033 `pName` must be either null or a pointer to a null-terminated string. The function
   2034 makes a local copy of the string and sets it as the allocation's `pName`. The string
   2035 passed as `pName` doesn't need to remain valid for the whole lifetime of the allocation -
   2036 you can free it after this call. The string previously pointed to by the allocation's
   2037 `pName` is freed from memory.
   2038 */
   2039 VMA_CALL_PRE void VMA_CALL_POST vmaSetAllocationName(
   2040     VmaAllocator VMA_NOT_NULL allocator,
   2041     VmaAllocation VMA_NOT_NULL allocation,
   2042     const char* VMA_NULLABLE pName);
   2043 
   2044 /**
   2045 \brief Given an allocation, returns Property Flags of its memory type.
   2046 
   2047 This is just a convenience function. The same information can be obtained using
   2048 vmaGetAllocationInfo() + vmaGetMemoryProperties().
   2049 */
   2050 VMA_CALL_PRE void VMA_CALL_POST vmaGetAllocationMemoryProperties(
   2051     VmaAllocator VMA_NOT_NULL allocator,
   2052     VmaAllocation VMA_NOT_NULL allocation,
   2053     VkMemoryPropertyFlags* VMA_NOT_NULL pFlags);
   2054 
   2055 /** \brief Maps memory represented by the given allocation and returns a pointer to it.
   2056 
   2057 Maps memory represented by the given allocation to make it accessible to CPU code.
   2058 When it succeeds, `*ppData` contains a pointer to the first byte of this memory.
   2059 
   2060 \warning
   2061 If the allocation is part of a bigger `VkDeviceMemory` block, the returned pointer is
   2062 correctly offset to the beginning of the region assigned to this particular allocation.
   2063 Unlike the result of `vkMapMemory`, it points to the allocation, not to the beginning of the whole block.
   2064 You should not add VmaAllocationInfo::offset to it!
   2065 
   2066 Mapping is internally reference-counted and synchronized, so although the raw Vulkan
   2067 function `vkMapMemory()` cannot be used to map the same block of `VkDeviceMemory`
   2068 multiple times simultaneously, it is safe to call this function on allocations
   2069 assigned to the same memory block. Actual Vulkan memory will be mapped on first
   2070 mapping and unmapped on last unmapping.
   2071 
   2072 If the function succeeded, you must call vmaUnmapMemory() to unmap the
   2073 allocation when mapping is no longer needed or before freeing the allocation, at
   2074 the latest.
   2075 
   2076 It is also safe to call this function multiple times on the same allocation. You
   2077 must call vmaUnmapMemory() the same number of times as you called vmaMapMemory().
   2078 
   2079 It is also safe to call this function on an allocation created with the
   2080 #VMA_ALLOCATION_CREATE_MAPPED_BIT flag. Its memory stays mapped all the time.
   2081 You must still call vmaUnmapMemory() the same number of times as you called
   2082 vmaMapMemory(). You must not call vmaUnmapMemory() an additional time to free the
   2083 "0-th" mapping made automatically due to the #VMA_ALLOCATION_CREATE_MAPPED_BIT flag.
   2084 
   2085 This function fails when used on an allocation made in a memory type that is not
   2086 `HOST_VISIBLE`.
   2087 
   2088 This function doesn't automatically flush or invalidate caches.
   2089 If the allocation is made from a memory type that is not `HOST_COHERENT`,
   2090 you also need to use vmaInvalidateAllocation() / vmaFlushAllocation(), as required by the Vulkan specification.
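
A hedged sketch of a map-write-unmap cycle (`myData` and `myDataSize` are illustrative):

\code
void* mappedData;
if(vmaMapMemory(allocator, allocation, &mappedData) == VK_SUCCESS)
{
    memcpy(mappedData, myData, myDataSize);
    // Needed only if the memory type is not HOST_COHERENT:
    vmaFlushAllocation(allocator, allocation, 0, VK_WHOLE_SIZE);
    vmaUnmapMemory(allocator, allocation);
}
\endcode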
   2091 */
   2092 VMA_CALL_PRE VkResult VMA_CALL_POST vmaMapMemory(
   2093     VmaAllocator VMA_NOT_NULL allocator,
   2094     VmaAllocation VMA_NOT_NULL allocation,
   2095     void* VMA_NULLABLE* VMA_NOT_NULL ppData);
   2096 
   2097 /** \brief Unmaps memory represented by the given allocation, mapped previously using vmaMapMemory().
   2098 
   2099 For details, see description of vmaMapMemory().
   2100 
   2101 This function doesn't automatically flush or invalidate caches.
   2102 If the allocation is made from a memory type that is not `HOST_COHERENT`,
   2103 you also need to use vmaInvalidateAllocation() / vmaFlushAllocation(), as required by the Vulkan specification.
   2104 */
   2105 VMA_CALL_PRE void VMA_CALL_POST vmaUnmapMemory(
   2106     VmaAllocator VMA_NOT_NULL allocator,
   2107     VmaAllocation VMA_NOT_NULL allocation);
   2108 
   2109 /** \brief Flushes memory of the given allocation.
   2110 
   2111 Calls `vkFlushMappedMemoryRanges()` for memory associated with the given range of the given allocation.
   2112 It needs to be called after writing to mapped memory for memory types that are not `HOST_COHERENT`.
   2113 The unmap operation doesn't do that automatically.
   2114 
   2115 - `offset` must be relative to the beginning of allocation.
   2116 - `size` can be `VK_WHOLE_SIZE`. It means all memory from `offset` to the end of the given allocation.
   2117 - `offset` and `size` don't have to be aligned.
   2118   They are internally rounded down/up to a multiple of `nonCoherentAtomSize`.
   2119 - If `size` is 0, this call is ignored.
   2120 - If the memory type that the `allocation` belongs to is not `HOST_VISIBLE` or it is `HOST_COHERENT`,
   2121   this call is ignored.
   2122 
   2123 Warning! `offset` and `size` are relative to the contents of the given `allocation`.
   2124 If you mean the whole allocation, you can pass 0 and `VK_WHOLE_SIZE`, respectively.
   2125 Do not pass the allocation's offset as `offset`!
   2126 
   2127 This function returns the `VkResult` from `vkFlushMappedMemoryRanges` if it is
   2128 called, otherwise `VK_SUCCESS`.
   2129 */
   2130 VMA_CALL_PRE VkResult VMA_CALL_POST vmaFlushAllocation(
   2131     VmaAllocator VMA_NOT_NULL allocator,
   2132     VmaAllocation VMA_NOT_NULL allocation,
   2133     VkDeviceSize offset,
   2134     VkDeviceSize size);
   2135 
   2136 /** \brief Invalidates memory of the given allocation.
   2137 
   2138 Calls `vkInvalidateMappedMemoryRanges()` for memory associated with the given range of the given allocation.
   2139 It needs to be called before reading from mapped memory for memory types that are not `HOST_COHERENT`.
   2140 The map operation doesn't do that automatically.
   2141 
   2142 - `offset` must be relative to the beginning of allocation.
   2143 - `size` can be `VK_WHOLE_SIZE`. It means all memory from `offset` the the end of given allocation.
   2144 - `offset` and `size` don't have to be aligned.
   2145   They are internally rounded down/up to multiply of `nonCoherentAtomSize`.
   2146 - If `size` is 0, this call is ignored.
   2147 - If memory type that the `allocation` belongs to is not `HOST_VISIBLE` or it is `HOST_COHERENT`,
   2148   this call is ignored.
   2149 
   2150 Warning! `offset` and `size` are relative to the contents of given `allocation`.
   2151 If you mean whole allocation, you can pass 0 and `VK_WHOLE_SIZE`, respectively.
   2152 Do not pass allocation's offset as `offset`!!!
   2153 
   2154 This function returns the `VkResult` from `vkInvalidateMappedMemoryRanges` if
   2155 it is called, otherwise `VK_SUCCESS`.
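
        Example of a read-back (a sketch; `alloc` is a mapped-readable allocation,
        `myData` and `myDataSize` are hypothetical user variables):

        \code
        // Invalidate before reading, in case the memory type is not HOST_COHERENT.
        vmaInvalidateAllocation(allocator, alloc, 0, VK_WHOLE_SIZE);

        void* mappedData;
        if(vmaMapMemory(allocator, alloc, &mappedData) == VK_SUCCESS)
        {
            memcpy(myData, mappedData, myDataSize);
            vmaUnmapMemory(allocator, alloc);
        }
        \endcode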
   2156 */
   2157 VMA_CALL_PRE VkResult VMA_CALL_POST vmaInvalidateAllocation(
   2158     VmaAllocator VMA_NOT_NULL allocator,
   2159     VmaAllocation VMA_NOT_NULL allocation,
   2160     VkDeviceSize offset,
   2161     VkDeviceSize size);
   2162 
   2163 /** \brief Flushes memory of given set of allocations.
   2164 
   2165 Calls `vkFlushMappedMemoryRanges()` for memory associated with given ranges of given allocations.
   2166 For more information, see documentation of vmaFlushAllocation().
   2167 
   2168 \param allocator
   2169 \param allocationCount
   2170 \param allocations
   2171 \param offsets If not null, it must point to an array of offsets of regions to flush, relative to the beginning of respective allocations. Null means all offsets are zero.
   2172 \param sizes If not null, it must point to an array of sizes of regions to flush in respective allocations. Null means `VK_WHOLE_SIZE` for all allocations.
   2173 
   2174 This function returns the `VkResult` from `vkFlushMappedMemoryRanges` if it is
   2175 called, otherwise `VK_SUCCESS`.
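
        Example (a sketch, assuming a valid `allocator`, flushing the whole contents of
        two hypothetical allocations `alloc0` and `alloc1` in one call):

        \code
        VmaAllocation allocs[2] = { alloc0, alloc1 };
        // Null offsets/sizes mean: offset 0 and VK_WHOLE_SIZE for every allocation.
        vmaFlushAllocations(allocator, 2, allocs, nullptr, nullptr);
        \endcode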
   2176 */
   2177 VMA_CALL_PRE VkResult VMA_CALL_POST vmaFlushAllocations(
   2178     VmaAllocator VMA_NOT_NULL allocator,
   2179     uint32_t allocationCount,
   2180     const VmaAllocation VMA_NOT_NULL* VMA_NULLABLE VMA_LEN_IF_NOT_NULL(allocationCount) allocations,
   2181     const VkDeviceSize* VMA_NULLABLE VMA_LEN_IF_NOT_NULL(allocationCount) offsets,
   2182     const VkDeviceSize* VMA_NULLABLE VMA_LEN_IF_NOT_NULL(allocationCount) sizes);
   2183 
   2184 /** \brief Invalidates memory of given set of allocations.
   2185 
   2186 Calls `vkInvalidateMappedMemoryRanges()` for memory associated with given ranges of given allocations.
   2187 For more information, see documentation of vmaInvalidateAllocation().
   2188 
   2189 \param allocator
   2190 \param allocationCount
   2191 \param allocations
   2192 \param offsets If not null, it must point to an array of offsets of regions to invalidate, relative to the beginning of respective allocations. Null means all offsets are zero.
   2193 \param sizes If not null, it must point to an array of sizes of regions to invalidate in respective allocations. Null means `VK_WHOLE_SIZE` for all allocations.
   2194 
   2195 This function returns the `VkResult` from `vkInvalidateMappedMemoryRanges` if it is
   2196 called, otherwise `VK_SUCCESS`.
   2197 */
   2198 VMA_CALL_PRE VkResult VMA_CALL_POST vmaInvalidateAllocations(
   2199     VmaAllocator VMA_NOT_NULL allocator,
   2200     uint32_t allocationCount,
   2201     const VmaAllocation VMA_NOT_NULL* VMA_NULLABLE VMA_LEN_IF_NOT_NULL(allocationCount) allocations,
   2202     const VkDeviceSize* VMA_NULLABLE VMA_LEN_IF_NOT_NULL(allocationCount) offsets,
   2203     const VkDeviceSize* VMA_NULLABLE VMA_LEN_IF_NOT_NULL(allocationCount) sizes);
   2204 
   2205 /** \brief Maps the allocation temporarily if needed, copies data from specified host pointer to it, and flushes the memory from the host caches if needed.
   2206 
   2207 \param allocator
   2208 \param pSrcHostPointer Pointer to the host data that becomes the source of the copy.
   2209 \param dstAllocation   Handle to the allocation that becomes destination of the copy.
   2210 \param dstAllocationLocalOffset  Offset within `dstAllocation` where to write copied data, in bytes.
   2211 \param size            Number of bytes to copy.
   2212 
   2213 This is a convenience function that allows you to easily copy data from a host pointer to an allocation.
   2214 The same behavior can be achieved by calling vmaMapMemory(), `memcpy()`, vmaUnmapMemory(), vmaFlushAllocation().
   2215 
   2216 This function can be called only for allocations created in a memory type that has `VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT` flag.
   2217 It can be ensured e.g. by using #VMA_MEMORY_USAGE_AUTO and #VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT or
   2218 #VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT.
   2219 Otherwise, the function will fail and generate a Validation Layers error.
   2220 
   2221 `dstAllocationLocalOffset` is relative to the contents of the given `dstAllocation`.
   2222 If you mean the whole allocation, you should pass 0.
   2223 Do not pass the allocation's offset within the device memory block as this parameter!
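
        Example (a sketch; `myData` and `myDataSize` are hypothetical user variables):

        \code
        // Equivalent to map + memcpy + unmap + flush, in one call.
        VkResult res = vmaCopyMemoryToAllocation(allocator, myData, alloc, 0, myDataSize);
        \endcode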
   2224 */
   2225 VMA_CALL_PRE VkResult VMA_CALL_POST vmaCopyMemoryToAllocation(
   2226     VmaAllocator VMA_NOT_NULL allocator,
   2227     const void* VMA_NOT_NULL VMA_LEN_IF_NOT_NULL(size) pSrcHostPointer,
   2228     VmaAllocation VMA_NOT_NULL dstAllocation,
   2229     VkDeviceSize dstAllocationLocalOffset,
   2230     VkDeviceSize size);
   2231 
   2232 /** \brief Invalidates memory in the host caches if needed, maps the allocation temporarily if needed, and copies data from it to a specified host pointer.
   2233 
   2234 \param allocator
   2235 \param srcAllocation   Handle to the allocation that becomes source of the copy.
   2236 \param srcAllocationLocalOffset  Offset within `srcAllocation` where to read copied data, in bytes.
   2237 \param pDstHostPointer Pointer to the host memory that becomes the destination of the copy.
   2238 \param size            Number of bytes to copy.
   2239 
   2240 This is a convenience function that allows you to easily copy data from an allocation to a host pointer.
   2241 The same behavior can be achieved by calling vmaInvalidateAllocation(), vmaMapMemory(), `memcpy()`, vmaUnmapMemory().
   2242 
   2243 This function should be called only for allocations created in a memory type that has `VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT`
   2244 and `VK_MEMORY_PROPERTY_HOST_CACHED_BIT` flag.
   2245 It can be ensured e.g. by using #VMA_MEMORY_USAGE_AUTO and #VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT.
   2246 Otherwise, the function may fail and generate a Validation Layers error.
   2247 It may also work very slowly when reading from uncached memory.
   2248 
   2249 `srcAllocationLocalOffset` is relative to the contents of the given `srcAllocation`.
   2250 If you mean the whole allocation, you should pass 0.
   2251 Do not pass the allocation's offset within the device memory block as this parameter!
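
        Example (a sketch; `myData` and `myDataSize` are hypothetical user variables):

        \code
        // Equivalent to invalidate + map + memcpy + unmap, in one call.
        VkResult res = vmaCopyAllocationToMemory(allocator, alloc, 0, myData, myDataSize);
        \endcode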
   2252 */
   2253 VMA_CALL_PRE VkResult VMA_CALL_POST vmaCopyAllocationToMemory(
   2254     VmaAllocator VMA_NOT_NULL allocator,
   2255     VmaAllocation VMA_NOT_NULL srcAllocation,
   2256     VkDeviceSize srcAllocationLocalOffset,
   2257     void* VMA_NOT_NULL VMA_LEN_IF_NOT_NULL(size) pDstHostPointer,
   2258     VkDeviceSize size);
   2259 
   2260 /** \brief Checks magic number in margins around all allocations in given memory types (in both default and custom pools) in search for corruptions.
   2261 
   2262 \param allocator
   2263 \param memoryTypeBits Bit mask, where each bit set means that a memory type with that index should be checked.
   2264 
   2265 Corruption detection is enabled only when the `VMA_DEBUG_DETECT_CORRUPTION` macro is defined to nonzero,
   2266 `VMA_DEBUG_MARGIN` is defined to nonzero, and only for memory types that are
   2267 `HOST_VISIBLE` and `HOST_COHERENT`. For more information, see [Corruption detection](@ref debugging_memory_usage_corruption_detection).
   2268 
   2269 Possible return values:
   2270 
   2271 - `VK_ERROR_FEATURE_NOT_PRESENT` - corruption detection is not enabled for any of specified memory types.
   2272 - `VK_SUCCESS` - corruption detection has been performed and succeeded.
   2273 - `VK_ERROR_UNKNOWN` - corruption detection has been performed and found memory corruptions around one of the allocations.
   2274   `VMA_ASSERT` is also fired in that case.
   2275 - Other value: Error returned by Vulkan, e.g. memory mapping failure.
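
        Example (a sketch; requires the library to be compiled with corruption detection
        enabled - see the CONFIGURATION section below):

        \code
        // Check margins of allocations in all memory types.
        VkResult res = vmaCheckCorruption(allocator, UINT32_MAX);
        \endcode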
   2276 */
   2277 VMA_CALL_PRE VkResult VMA_CALL_POST vmaCheckCorruption(
   2278     VmaAllocator VMA_NOT_NULL allocator,
   2279     uint32_t memoryTypeBits);
   2280 
   2281 /** \brief Begins defragmentation process.
   2282 
   2283 \param allocator Allocator object.
   2284 \param pInfo Structure filled with parameters of defragmentation.
   2285 \param[out] pContext Context object that must be passed to vmaEndDefragmentation() to finish defragmentation.
   2286 \returns
   2287 - `VK_SUCCESS` if defragmentation can begin.
   2288 - `VK_ERROR_FEATURE_NOT_PRESENT` if defragmentation is not supported.
   2289 
   2290 For more information about defragmentation, see documentation chapter:
   2291 [Defragmentation](@ref defragmentation).
   2292 */
   2293 VMA_CALL_PRE VkResult VMA_CALL_POST vmaBeginDefragmentation(
   2294     VmaAllocator VMA_NOT_NULL allocator,
   2295     const VmaDefragmentationInfo* VMA_NOT_NULL pInfo,
   2296     VmaDefragmentationContext VMA_NULLABLE* VMA_NOT_NULL pContext);
   2297 
   2298 /** \brief Ends defragmentation process.
   2299 
   2300 \param allocator Allocator object.
   2301 \param context Context object that has been created by vmaBeginDefragmentation().
   2302 \param[out] pStats Optional stats for the defragmentation. Can be null.
   2303 
   2304 Use this function to finish defragmentation started by vmaBeginDefragmentation().
   2305 */
   2306 VMA_CALL_PRE void VMA_CALL_POST vmaEndDefragmentation(
   2307     VmaAllocator VMA_NOT_NULL allocator,
   2308     VmaDefragmentationContext VMA_NOT_NULL context,
   2309     VmaDefragmentationStats* VMA_NULLABLE pStats);
   2310 
   2311 /** \brief Starts single defragmentation pass.
   2312 
   2313 \param allocator Allocator object.
   2314 \param context Context object that has been created by vmaBeginDefragmentation().
   2315 \param[out] pPassInfo Computed information for current pass.
   2316 \returns
   2317 - `VK_SUCCESS` if no more moves are possible. Then you can omit the call to vmaEndDefragmentationPass() and simply end the whole defragmentation.
   2318 - `VK_INCOMPLETE` if there are pending moves returned in `pPassInfo`. You need to perform them, call vmaEndDefragmentationPass(),
   2319   and then preferably try another pass with vmaBeginDefragmentationPass().
   2320 */
   2321 VMA_CALL_PRE VkResult VMA_CALL_POST vmaBeginDefragmentationPass(
   2322     VmaAllocator VMA_NOT_NULL allocator,
   2323     VmaDefragmentationContext VMA_NOT_NULL context,
   2324     VmaDefragmentationPassMoveInfo* VMA_NOT_NULL pPassInfo);
   2325 
   2326 /** \brief Ends single defragmentation pass.
   2327 
   2328 \param allocator Allocator object.
   2329 \param context Context object that has been created by vmaBeginDefragmentation().
   2330 \param pPassInfo Computed information for current pass filled by vmaBeginDefragmentationPass() and possibly modified by you.
   2331 
   2332 Returns `VK_SUCCESS` if no more moves are possible or `VK_INCOMPLETE` if more defragmentation passes are possible.
   2333 
   2334 Ends the incremental defragmentation pass and commits all defragmentation moves from `pPassInfo`.
   2335 After this call:
   2336 
   2337 - Allocations at `pPassInfo[i].srcAllocation` that had `pPassInfo[i].operation ==` #VMA_DEFRAGMENTATION_MOVE_OPERATION_COPY
   2338   (which is the default) will be pointing to the new destination place.
   2339 - Allocation at `pPassInfo[i].srcAllocation` that had `pPassInfo[i].operation ==` #VMA_DEFRAGMENTATION_MOVE_OPERATION_DESTROY
   2340   will be freed.
   2341 
   2342 If no more moves are possible, you can end the whole defragmentation.
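
        A sketch of the whole defragmentation loop (assuming a valid `allocator`; the
        processing of the returned moves is elided):

        \code
        VmaDefragmentationInfo defragInfo = {};
        defragInfo.flags = VMA_DEFRAGMENTATION_FLAG_ALGORITHM_FAST_BIT;

        VmaDefragmentationContext defragCtx;
        VkResult res = vmaBeginDefragmentation(allocator, &defragInfo, &defragCtx);

        for(;;)
        {
            VmaDefragmentationPassMoveInfo pass;
            res = vmaBeginDefragmentationPass(allocator, defragCtx, &pass);
            if(res == VK_SUCCESS)
                break; // No more moves possible.
            // res == VK_INCOMPLETE: copy the data of the moves in pass.pMoves to their
            // new places, e.g. by recording and submitting a command buffer, then:
            res = vmaEndDefragmentationPass(allocator, defragCtx, &pass);
            if(res == VK_SUCCESS)
                break;
        }

        vmaEndDefragmentation(allocator, defragCtx, nullptr);
        \endcode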
   2343 */
   2344 VMA_CALL_PRE VkResult VMA_CALL_POST vmaEndDefragmentationPass(
   2345     VmaAllocator VMA_NOT_NULL allocator,
   2346     VmaDefragmentationContext VMA_NOT_NULL context,
   2347     VmaDefragmentationPassMoveInfo* VMA_NOT_NULL pPassInfo);
   2348 
   2349 /** \brief Binds buffer to allocation.
   2350 
   2351 Binds specified buffer to region of memory represented by specified allocation.
   2352 Gets `VkDeviceMemory` handle and offset from the allocation.
   2353 If you want to create a buffer, allocate memory for it and bind them together separately,
   2354 you should use this function for binding instead of standard `vkBindBufferMemory()`,
   2355 because it ensures proper synchronization so that when a `VkDeviceMemory` object is used by multiple
   2356 allocations, calls to `vkBind*Memory()` or `vkMapMemory()` won't happen from multiple threads simultaneously
   2357 (which is illegal in Vulkan).
   2358 
   2359 It is recommended to use the function vmaCreateBuffer() instead of this one.
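
        A sketch of the separate create/allocate/bind path (assuming valid `device` and
        `allocator` and filled `bufCreateInfo`/`allocCreateInfo`; error handling elided):

        \code
        VkBuffer buf;
        vkCreateBuffer(device, &bufCreateInfo, nullptr, &buf);

        VkMemoryRequirements memReq;
        vkGetBufferMemoryRequirements(device, buf, &memReq);

        VmaAllocation alloc;
        vmaAllocateMemory(allocator, &memReq, &allocCreateInfo, &alloc, nullptr);

        // Bind through VMA instead of vkBindBufferMemory() for proper synchronization.
        vmaBindBufferMemory(allocator, alloc, buf);
        \endcode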
   2360 */
   2361 VMA_CALL_PRE VkResult VMA_CALL_POST vmaBindBufferMemory(
   2362     VmaAllocator VMA_NOT_NULL allocator,
   2363     VmaAllocation VMA_NOT_NULL allocation,
   2364     VkBuffer VMA_NOT_NULL_NON_DISPATCHABLE buffer);
   2365 
   2366 /** \brief Binds buffer to allocation with additional parameters.
   2367 
   2368 \param allocator
   2369 \param allocation
   2370 \param allocationLocalOffset Additional offset to be added while binding, relative to the beginning of the `allocation`. Normally it should be 0.
   2371 \param buffer
   2372 \param pNext A chain of structures to be attached to `VkBindBufferMemoryInfoKHR` structure used internally. Normally it should be null.
   2373 
   2374 This function is similar to vmaBindBufferMemory(), but it provides additional parameters.
   2375 
   2376 If `pNext` is not null, #VmaAllocator object must have been created with #VMA_ALLOCATOR_CREATE_KHR_BIND_MEMORY2_BIT flag
   2377 or with VmaAllocatorCreateInfo::vulkanApiVersion `>= VK_API_VERSION_1_1`. Otherwise the call fails.
   2378 */
   2379 VMA_CALL_PRE VkResult VMA_CALL_POST vmaBindBufferMemory2(
   2380     VmaAllocator VMA_NOT_NULL allocator,
   2381     VmaAllocation VMA_NOT_NULL allocation,
   2382     VkDeviceSize allocationLocalOffset,
   2383     VkBuffer VMA_NOT_NULL_NON_DISPATCHABLE buffer,
   2384     const void* VMA_NULLABLE VMA_EXTENDS_VK_STRUCT(VkBindBufferMemoryInfoKHR) pNext);
   2385 
   2386 /** \brief Binds image to allocation.
   2387 
   2388 Binds specified image to region of memory represented by specified allocation.
   2389 Gets `VkDeviceMemory` handle and offset from the allocation.
   2390 If you want to create an image, allocate memory for it and bind them together separately,
   2391 you should use this function for binding instead of standard `vkBindImageMemory()`,
   2392 because it ensures proper synchronization so that when a `VkDeviceMemory` object is used by multiple
   2393 allocations, calls to `vkBind*Memory()` or `vkMapMemory()` won't happen from multiple threads simultaneously
   2394 (which is illegal in Vulkan).
   2395 
   2396 It is recommended to use the function vmaCreateImage() instead of this one.
   2397 */
   2398 VMA_CALL_PRE VkResult VMA_CALL_POST vmaBindImageMemory(
   2399     VmaAllocator VMA_NOT_NULL allocator,
   2400     VmaAllocation VMA_NOT_NULL allocation,
   2401     VkImage VMA_NOT_NULL_NON_DISPATCHABLE image);
   2402 
   2403 /** \brief Binds image to allocation with additional parameters.
   2404 
   2405 \param allocator
   2406 \param allocation
   2407 \param allocationLocalOffset Additional offset to be added while binding, relative to the beginning of the `allocation`. Normally it should be 0.
   2408 \param image
   2409 \param pNext A chain of structures to be attached to `VkBindImageMemoryInfoKHR` structure used internally. Normally it should be null.
   2410 
   2411 This function is similar to vmaBindImageMemory(), but it provides additional parameters.
   2412 
   2413 If `pNext` is not null, #VmaAllocator object must have been created with #VMA_ALLOCATOR_CREATE_KHR_BIND_MEMORY2_BIT flag
   2414 or with VmaAllocatorCreateInfo::vulkanApiVersion `>= VK_API_VERSION_1_1`. Otherwise the call fails.
   2415 */
   2416 VMA_CALL_PRE VkResult VMA_CALL_POST vmaBindImageMemory2(
   2417     VmaAllocator VMA_NOT_NULL allocator,
   2418     VmaAllocation VMA_NOT_NULL allocation,
   2419     VkDeviceSize allocationLocalOffset,
   2420     VkImage VMA_NOT_NULL_NON_DISPATCHABLE image,
   2421     const void* VMA_NULLABLE VMA_EXTENDS_VK_STRUCT(VkBindImageMemoryInfoKHR) pNext);
   2422 
   2423 /** \brief Creates a new `VkBuffer`, allocates and binds memory for it.
   2424 
   2425 \param allocator
   2426 \param pBufferCreateInfo
   2427 \param pAllocationCreateInfo
   2428 \param[out] pBuffer Buffer that was created.
   2429 \param[out] pAllocation Allocation that was created.
   2430 \param[out] pAllocationInfo Optional. Information about allocated memory. It can be later fetched using function vmaGetAllocationInfo().
   2431 
   2432 This function automatically:
   2433 
   2434 -# Creates buffer.
   2435 -# Allocates appropriate memory for it.
   2436 -# Binds the buffer with the memory.
   2437 
   2438 If any of these operations fail, the buffer and allocation are not created,
   2439 the returned value is a negative error code, and `*pBuffer` and `*pAllocation` are null.
   2440 
   2441 If the function succeeded, you must destroy both buffer and allocation when you
   2442 no longer need them using either convenience function vmaDestroyBuffer() or
   2443 separately, using `vkDestroyBuffer()` and vmaFreeMemory().
   2444 
   2445 If the #VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT flag was used,
   2446 the VK_KHR_dedicated_allocation extension is used internally to query the driver whether
   2447 it requires or prefers the new buffer to have a dedicated allocation. If yes,
   2448 and if dedicated allocation is possible
   2449 (#VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT is not used), it creates a dedicated
   2450 allocation for this buffer, just like when using
   2451 #VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT.
   2452 
   2453 \note This function creates a new `VkBuffer`. Sub-allocation of parts of one large buffer,
   2454 although recommended as a good practice, is out of scope of this library and could be implemented
   2455 by the user as a higher-level logic on top of VMA.
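
        Example (a sketch, assuming a valid `allocator`):

        \code
        VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
        bufCreateInfo.size = 65536;
        bufCreateInfo.usage = VK_BUFFER_USAGE_VERTEX_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;

        VmaAllocationCreateInfo allocCreateInfo = {};
        allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;

        VkBuffer buf;
        VmaAllocation alloc;
        VkResult res = vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, nullptr);
        // ... use the buffer ...
        vmaDestroyBuffer(allocator, buf, alloc);
        \endcode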
   2456 */
   2457 VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateBuffer(
   2458     VmaAllocator VMA_NOT_NULL allocator,
   2459     const VkBufferCreateInfo* VMA_NOT_NULL pBufferCreateInfo,
   2460     const VmaAllocationCreateInfo* VMA_NOT_NULL pAllocationCreateInfo,
   2461     VkBuffer VMA_NULLABLE_NON_DISPATCHABLE* VMA_NOT_NULL pBuffer,
   2462     VmaAllocation VMA_NULLABLE* VMA_NOT_NULL pAllocation,
   2463     VmaAllocationInfo* VMA_NULLABLE pAllocationInfo);
   2464 
   2465 /** \brief Creates a buffer with additional minimum alignment.
   2466 
   2467 Similar to vmaCreateBuffer() but provides an additional parameter `minAlignment`, which allows you to specify a custom
   2468 minimum alignment to be used when placing the buffer inside a larger memory block. This may be needed e.g.
   2469 for interop with OpenGL.
   2470 */
   2471 VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateBufferWithAlignment(
   2472     VmaAllocator VMA_NOT_NULL allocator,
   2473     const VkBufferCreateInfo* VMA_NOT_NULL pBufferCreateInfo,
   2474     const VmaAllocationCreateInfo* VMA_NOT_NULL pAllocationCreateInfo,
   2475     VkDeviceSize minAlignment,
   2476     VkBuffer VMA_NULLABLE_NON_DISPATCHABLE* VMA_NOT_NULL pBuffer,
   2477     VmaAllocation VMA_NULLABLE* VMA_NOT_NULL pAllocation,
   2478     VmaAllocationInfo* VMA_NULLABLE pAllocationInfo);
   2479 
   2480 /** \brief Creates a new `VkBuffer`, binds already created memory for it.
   2481 
   2482 \param allocator
   2483 \param allocation Allocation that provides memory to be used for binding new buffer to it.
   2484 \param pBufferCreateInfo
   2485 \param[out] pBuffer Buffer that was created.
   2486 
   2487 This function automatically:
   2488 
   2489 -# Creates buffer.
   2490 -# Binds the buffer with the supplied memory.
   2491 
   2492 If any of these operations fail, the buffer is not created,
   2493 the returned value is a negative error code, and `*pBuffer` is null.
   2494 
   2495 If the function succeeded, you must destroy the buffer when you
   2496 no longer need it using `vkDestroyBuffer()`. If you want to also destroy the corresponding
   2497 allocation you can use convenience function vmaDestroyBuffer().
   2498 
   2499 \note There is a new version of this function augmented with parameter `allocationLocalOffset` - see vmaCreateAliasingBuffer2().
   2500 */
   2501 VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateAliasingBuffer(
   2502     VmaAllocator VMA_NOT_NULL allocator,
   2503     VmaAllocation VMA_NOT_NULL allocation,
   2504     const VkBufferCreateInfo* VMA_NOT_NULL pBufferCreateInfo,
   2505     VkBuffer VMA_NULLABLE_NON_DISPATCHABLE* VMA_NOT_NULL pBuffer);
   2506 
   2507 /** \brief Creates a new `VkBuffer`, binds already created memory for it.
   2508 
   2509 \param allocator
   2510 \param allocation Allocation that provides memory to be used for binding new buffer to it.
   2511 \param allocationLocalOffset Additional offset to be added while binding, relative to the beginning of the allocation. Normally it should be 0.
   2512 \param pBufferCreateInfo 
   2513 \param[out] pBuffer Buffer that was created.
   2514 
   2515 This function automatically:
   2516 
   2517 -# Creates buffer.
   2518 -# Binds the buffer with the supplied memory.
   2519 
   2520 If any of these operations fail, the buffer is not created,
   2521 the returned value is a negative error code, and `*pBuffer` is null.
   2522 
   2523 If the function succeeded, you must destroy the buffer when you
   2524 no longer need it using `vkDestroyBuffer()`. If you want to also destroy the corresponding
   2525 allocation you can use convenience function vmaDestroyBuffer().
   2526 
   2527 \note This is a new version of the function augmented with parameter `allocationLocalOffset`.
   2528 */
   2529 VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateAliasingBuffer2(
   2530     VmaAllocator VMA_NOT_NULL allocator,
   2531     VmaAllocation VMA_NOT_NULL allocation,
   2532     VkDeviceSize allocationLocalOffset,
   2533     const VkBufferCreateInfo* VMA_NOT_NULL pBufferCreateInfo,
   2534     VkBuffer VMA_NULLABLE_NON_DISPATCHABLE* VMA_NOT_NULL pBuffer);
   2535 
   2536 /** \brief Destroys Vulkan buffer and frees allocated memory.
   2537 
   2538 This is just a convenience function equivalent to:
   2539 
   2540 \code
   2541 vkDestroyBuffer(device, buffer, allocationCallbacks);
   2542 vmaFreeMemory(allocator, allocation);
   2543 \endcode
   2544 
   2545 It is safe to pass null as buffer and/or allocation.
   2546 */
   2547 VMA_CALL_PRE void VMA_CALL_POST vmaDestroyBuffer(
   2548     VmaAllocator VMA_NOT_NULL allocator,
   2549     VkBuffer VMA_NULLABLE_NON_DISPATCHABLE buffer,
   2550     VmaAllocation VMA_NULLABLE allocation);
   2551 
   2552 /// Function similar to vmaCreateBuffer().
   2553 VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateImage(
   2554     VmaAllocator VMA_NOT_NULL allocator,
   2555     const VkImageCreateInfo* VMA_NOT_NULL pImageCreateInfo,
   2556     const VmaAllocationCreateInfo* VMA_NOT_NULL pAllocationCreateInfo,
   2557     VkImage VMA_NULLABLE_NON_DISPATCHABLE* VMA_NOT_NULL pImage,
   2558     VmaAllocation VMA_NULLABLE* VMA_NOT_NULL pAllocation,
   2559     VmaAllocationInfo* VMA_NULLABLE pAllocationInfo);
   2560 
   2561 /// Function similar to vmaCreateAliasingBuffer() but for images.
   2562 VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateAliasingImage(
   2563     VmaAllocator VMA_NOT_NULL allocator,
   2564     VmaAllocation VMA_NOT_NULL allocation,
   2565     const VkImageCreateInfo* VMA_NOT_NULL pImageCreateInfo,
   2566     VkImage VMA_NULLABLE_NON_DISPATCHABLE* VMA_NOT_NULL pImage);
   2567 
   2568 /// Function similar to vmaCreateAliasingBuffer2() but for images.
   2569 VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateAliasingImage2(
   2570     VmaAllocator VMA_NOT_NULL allocator,
   2571     VmaAllocation VMA_NOT_NULL allocation,
   2572     VkDeviceSize allocationLocalOffset,
   2573     const VkImageCreateInfo* VMA_NOT_NULL pImageCreateInfo,
   2574     VkImage VMA_NULLABLE_NON_DISPATCHABLE* VMA_NOT_NULL pImage);
   2575 
   2576 /** \brief Destroys Vulkan image and frees allocated memory.
   2577 
   2578 This is just a convenience function equivalent to:
   2579 
   2580 \code
   2581 vkDestroyImage(device, image, allocationCallbacks);
   2582 vmaFreeMemory(allocator, allocation);
   2583 \endcode
   2584 
   2585 It is safe to pass null as image and/or allocation.
   2586 */
   2587 VMA_CALL_PRE void VMA_CALL_POST vmaDestroyImage(
   2588     VmaAllocator VMA_NOT_NULL allocator,
   2589     VkImage VMA_NULLABLE_NON_DISPATCHABLE image,
   2590     VmaAllocation VMA_NULLABLE allocation);
   2591 
   2592 /** @} */
   2593 
   2594 /**
   2595 \addtogroup group_virtual
   2596 @{
   2597 */
   2598 
   2599 /** \brief Creates new #VmaVirtualBlock object.
   2600 
   2601 \param pCreateInfo Parameters for creation.
   2602 \param[out] pVirtualBlock Returned virtual block object or `VMA_NULL` if creation failed.
   2603 */
   2604 VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateVirtualBlock(
   2605     const VmaVirtualBlockCreateInfo* VMA_NOT_NULL pCreateInfo,
   2606     VmaVirtualBlock VMA_NULLABLE* VMA_NOT_NULL pVirtualBlock);
   2607 
   2608 /** \brief Destroys #VmaVirtualBlock object.
   2609 
   2610 Please note that you should consciously handle virtual allocations that could remain unfreed in the block.
   2611 You should either free them individually using vmaVirtualFree() or call vmaClearVirtualBlock()
   2612 if you are sure this is what you want. If you do neither, an assert is called.
   2613 
   2614 If you keep pointers to some additional metadata associated with your virtual allocations in their `pUserData`,
   2615 don't forget to free them.
   2616 */
   2617 VMA_CALL_PRE void VMA_CALL_POST vmaDestroyVirtualBlock(
   2618     VmaVirtualBlock VMA_NULLABLE virtualBlock);
   2619 
   2620 /** \brief Returns true if the #VmaVirtualBlock is empty - contains 0 virtual allocations and has all its space available for new allocations.
   2621 */
   2622 VMA_CALL_PRE VkBool32 VMA_CALL_POST vmaIsVirtualBlockEmpty(
   2623     VmaVirtualBlock VMA_NOT_NULL virtualBlock);
   2624 
   2625 /** \brief Returns information about a specific virtual allocation within a virtual block, like its size and `pUserData` pointer.
   2626 */
   2627 VMA_CALL_PRE void VMA_CALL_POST vmaGetVirtualAllocationInfo(
   2628     VmaVirtualBlock VMA_NOT_NULL virtualBlock,
   2629     VmaVirtualAllocation VMA_NOT_NULL_NON_DISPATCHABLE allocation, VmaVirtualAllocationInfo* VMA_NOT_NULL pVirtualAllocInfo);
   2630 
   2631 /** \brief Allocates new virtual allocation inside given #VmaVirtualBlock.
   2632 
   2633 If the allocation fails due to not enough free space being available, `VK_ERROR_OUT_OF_DEVICE_MEMORY` is returned
   2634 (even though the function never allocates actual GPU memory).
   2635 `pAllocation` is then set to `VK_NULL_HANDLE` and `pOffset`, if not null, is set to `UINT64_MAX`.
   2636 
   2637 \param virtualBlock Virtual block
   2638 \param pCreateInfo Parameters for the allocation
   2639 \param[out] pAllocation Returned handle of the new allocation
   2640 \param[out] pOffset Returned offset of the new allocation. Optional, can be null.
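
        Example (a sketch of a full lifecycle; error handling elided):

        \code
        VmaVirtualBlockCreateInfo blockCreateInfo = {};
        blockCreateInfo.size = 1048576; // 1 MB of "virtual" space to sub-allocate.

        VmaVirtualBlock block;
        vmaCreateVirtualBlock(&blockCreateInfo, &block);

        VmaVirtualAllocationCreateInfo allocCreateInfo = {};
        allocCreateInfo.size = 4096;

        VmaVirtualAllocation alloc;
        VkDeviceSize offset;
        if(vmaVirtualAllocate(block, &allocCreateInfo, &alloc, &offset) == VK_SUCCESS)
        {
            // [offset, offset + 4096) is now reserved inside the block.
            vmaVirtualFree(block, alloc);
        }
        vmaDestroyVirtualBlock(block);
        \endcode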
   2641 */
   2642 VMA_CALL_PRE VkResult VMA_CALL_POST vmaVirtualAllocate(
   2643     VmaVirtualBlock VMA_NOT_NULL virtualBlock,
   2644     const VmaVirtualAllocationCreateInfo* VMA_NOT_NULL pCreateInfo,
   2645     VmaVirtualAllocation VMA_NULLABLE_NON_DISPATCHABLE* VMA_NOT_NULL pAllocation,
   2646     VkDeviceSize* VMA_NULLABLE pOffset);
   2647 
   2648 /** \brief Frees virtual allocation inside given #VmaVirtualBlock.
   2649 
   2650 It is correct to call this function with `allocation == VK_NULL_HANDLE` - it does nothing.
   2651 */
   2652 VMA_CALL_PRE void VMA_CALL_POST vmaVirtualFree(
   2653     VmaVirtualBlock VMA_NOT_NULL virtualBlock,
   2654     VmaVirtualAllocation VMA_NULLABLE_NON_DISPATCHABLE allocation);
   2655 
   2656 /** \brief Frees all virtual allocations inside given #VmaVirtualBlock.
   2657 
   2658 You must either call this function or free each virtual allocation individually with vmaVirtualFree()
   2659 before destroying a virtual block. Otherwise, an assert is called.
   2660 
   2661 If you keep a pointer to some additional metadata associated with your virtual allocation in its `pUserData`,
   2662 don't forget to free it as well.
   2663 */
   2664 VMA_CALL_PRE void VMA_CALL_POST vmaClearVirtualBlock(
   2665     VmaVirtualBlock VMA_NOT_NULL virtualBlock);
   2666 
   2667 /** \brief Changes custom pointer associated with given virtual allocation.
   2668 */
   2669 VMA_CALL_PRE void VMA_CALL_POST vmaSetVirtualAllocationUserData(
   2670     VmaVirtualBlock VMA_NOT_NULL virtualBlock,
   2671     VmaVirtualAllocation VMA_NOT_NULL_NON_DISPATCHABLE allocation,
   2672     void* VMA_NULLABLE pUserData);
   2673 
   2674 /** \brief Calculates and returns statistics about virtual allocations and memory usage in given #VmaVirtualBlock.
   2675 
   2676 This function is fast to call. For more detailed statistics, see vmaCalculateVirtualBlockStatistics().
   2677 */
   2678 VMA_CALL_PRE void VMA_CALL_POST vmaGetVirtualBlockStatistics(
   2679     VmaVirtualBlock VMA_NOT_NULL virtualBlock,
   2680     VmaStatistics* VMA_NOT_NULL pStats);
   2681 
   2682 /** \brief Calculates and returns detailed statistics about virtual allocations and memory usage in given #VmaVirtualBlock.
   2683 
   2684 This function is slow to call. Use for debugging purposes.
   2685 For less detailed statistics, see vmaGetVirtualBlockStatistics().
   2686 */
   2687 VMA_CALL_PRE void VMA_CALL_POST vmaCalculateVirtualBlockStatistics(
   2688     VmaVirtualBlock VMA_NOT_NULL virtualBlock,
   2689     VmaDetailedStatistics* VMA_NOT_NULL pStats);
   2690 
   2691 /** @} */
   2692 
   2693 #if VMA_STATS_STRING_ENABLED
   2694 /**
   2695 \addtogroup group_stats
   2696 @{
   2697 */
   2698 
   2699 /** \brief Builds and returns a null-terminated string in JSON format with information about given #VmaVirtualBlock.
   2700 \param virtualBlock Virtual block.
   2701 \param[out] ppStatsString Returned string.
   2702 \param detailedMap Pass `VK_FALSE` to only obtain statistics as returned by vmaCalculateVirtualBlockStatistics(). Pass `VK_TRUE` to also obtain full list of allocations and free spaces.
   2703 
   2704 Returned string must be freed using vmaFreeVirtualBlockStatsString().
   2705 */
   2706 VMA_CALL_PRE void VMA_CALL_POST vmaBuildVirtualBlockStatsString(
   2707     VmaVirtualBlock VMA_NOT_NULL virtualBlock,
   2708     char* VMA_NULLABLE* VMA_NOT_NULL ppStatsString,
   2709     VkBool32 detailedMap);
   2710 
   2711 /// Frees a string returned by vmaBuildVirtualBlockStatsString().
   2712 VMA_CALL_PRE void VMA_CALL_POST vmaFreeVirtualBlockStatsString(
   2713     VmaVirtualBlock VMA_NOT_NULL virtualBlock,
   2714     char* VMA_NULLABLE pStatsString);
   2715 
   2716 /** \brief Builds and returns statistics as a null-terminated string in JSON format.
   2717 \param allocator
   2718 \param[out] ppStatsString Must be freed using vmaFreeStatsString() function.
   2719 \param detailedMap
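
        Example (a sketch, assuming a valid `allocator`):

        \code
        char* statsString;
        vmaBuildStatsString(allocator, &statsString, VK_TRUE);
        // Write statsString to a log or file for offline inspection, then:
        vmaFreeStatsString(allocator, statsString);
        \endcode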
   2720 */
   2721 VMA_CALL_PRE void VMA_CALL_POST vmaBuildStatsString(
   2722     VmaAllocator VMA_NOT_NULL allocator,
   2723     char* VMA_NULLABLE* VMA_NOT_NULL ppStatsString,
   2724     VkBool32 detailedMap);
   2725 
   2726 VMA_CALL_PRE void VMA_CALL_POST vmaFreeStatsString(
   2727     VmaAllocator VMA_NOT_NULL allocator,
   2728     char* VMA_NULLABLE pStatsString);
   2729 
   2730 /** @} */
   2731 
   2732 #endif // VMA_STATS_STRING_ENABLED
   2733 
   2734 #endif // _VMA_FUNCTION_HEADERS
   2735 
   2736 #ifdef __cplusplus
   2737 }
   2738 #endif
   2739 
   2740 #endif // AMD_VULKAN_MEMORY_ALLOCATOR_H
   2741 
   2742 ////////////////////////////////////////////////////////////////////////////////
   2743 ////////////////////////////////////////////////////////////////////////////////
   2744 //
   2745 //    IMPLEMENTATION
   2746 //
   2747 ////////////////////////////////////////////////////////////////////////////////
   2748 ////////////////////////////////////////////////////////////////////////////////
   2749 
   2750 // For Visual Studio IntelliSense.
   2751 #if defined(__cplusplus) && defined(__INTELLISENSE__)
   2752 #define VMA_IMPLEMENTATION
   2753 #endif
   2754 
   2755 #ifdef VMA_IMPLEMENTATION
   2756 #undef VMA_IMPLEMENTATION
   2757 
   2758 #include <cstdint>
   2759 #include <cstdlib>
   2760 #include <cstring>
   2761 #include <cinttypes>
   2762 #include <utility>
   2763 #include <type_traits>
   2764 
   2765 #if !defined(VMA_CPP20)
   2766     #if __cplusplus >= 202002L || _MSVC_LANG >= 202002L // C++20
   2767         #define VMA_CPP20 1
   2768     #else
   2769         #define VMA_CPP20 0
   2770     #endif
   2771 #endif
   2772 
   2773 #ifdef _MSC_VER
   2774     #include <intrin.h> // For functions like __popcnt, _BitScanForward etc.
   2775 #endif
   2776 #if VMA_CPP20
   2777     #include <bit>
   2778 #endif
   2779 
   2780 #if VMA_STATS_STRING_ENABLED
   2781     #include <cstdio> // For snprintf
   2782 #endif
   2783 
   2784 /*******************************************************************************
   2785 CONFIGURATION SECTION
   2786 
   2787 Define some of these macros before each #include of this header or change them
   2788 here if you need behavior other than the default, depending on your environment.
   2789 */
   2790 #ifndef _VMA_CONFIGURATION
   2791 
   2792 /*
   2793 Define this macro to 1 to make the library fetch pointers to Vulkan functions
   2794 internally, like:
   2795 
   2796     vulkanFunctions.vkAllocateMemory = &vkAllocateMemory;
   2797 */
   2798 #if !defined(VMA_STATIC_VULKAN_FUNCTIONS) && !defined(VK_NO_PROTOTYPES)
   2799     #define VMA_STATIC_VULKAN_FUNCTIONS 1
   2800 #endif
   2801 
   2802 /*
   2803 Define this macro to 1 to make the library fetch pointers to Vulkan functions
   2804 internally, like:
   2805 
   2806     vulkanFunctions.vkAllocateMemory = (PFN_vkAllocateMemory)vkGetDeviceProcAddr(device, "vkAllocateMemory");
   2807 
   2808 To use this feature in new versions of VMA you now have to pass
   2809 VmaVulkanFunctions::vkGetInstanceProcAddr and vkGetDeviceProcAddr as
   2810 VmaAllocatorCreateInfo::pVulkanFunctions. Other members can be null.
   2811 */
   2812 #if !defined(VMA_DYNAMIC_VULKAN_FUNCTIONS)
   2813     #define VMA_DYNAMIC_VULKAN_FUNCTIONS 1
   2814 #endif
   2815 
   2816 #ifndef VMA_USE_STL_SHARED_MUTEX
   2817     #if __cplusplus >= 201703L || _MSVC_LANG >= 201703L // C++17
   2818         #define VMA_USE_STL_SHARED_MUTEX 1
   2819     // Visual Studio defines __cplusplus properly only when passed the additional parameter /Zc:__cplusplus.
   2820     // Otherwise it is always 199711L, even though shared_mutex has worked since Visual Studio 2015 Update 2.
   2821     #elif defined(_MSC_FULL_VER) && _MSC_FULL_VER >= 190023918 && __cplusplus == 199711L && _MSVC_LANG >= 201703L
   2822         #define VMA_USE_STL_SHARED_MUTEX 1
   2823     #else
   2824         #define VMA_USE_STL_SHARED_MUTEX 0
   2825     #endif
   2826 #endif
   2827 
   2828 /*
   2829 Define this macro to include custom header files without having to edit this file directly, e.g.:
   2830 
   2831     // Inside of "my_vma_configuration_user_includes.h":
   2832 
   2833     #include "my_custom_assert.h" // for MY_CUSTOM_ASSERT
   2834     #include "my_custom_min.h" // for my_custom_min
   2835     #include <algorithm>
   2836     #include <mutex>
   2837 
   2838     // Inside a different file, which includes "vk_mem_alloc.h":
   2839 
   2840     #define VMA_CONFIGURATION_USER_INCLUDES_H "my_vma_configuration_user_includes.h"
   2841     #define VMA_ASSERT(expr) MY_CUSTOM_ASSERT(expr)
   2842     #define VMA_MIN(v1, v2)  (my_custom_min(v1, v2))
   2843     #include "vk_mem_alloc.h"
   2844     ...
   2845 
   2846 The following headers are used in this CONFIGURATION section only, so feel free to
   2847 remove them if not needed.
   2848 */
   2849 #if !defined(VMA_CONFIGURATION_USER_INCLUDES_H)
   2850     #include <cassert> // for assert
   2851     #include <algorithm> // for min, max, swap
   2852     #include <mutex>
   2853 #else
   2854     #include VMA_CONFIGURATION_USER_INCLUDES_H
   2855 #endif
   2856 
   2857 #ifndef VMA_NULL
   2858    // Value used as null pointer. Define it to e.g.: nullptr, NULL, 0, (void*)0.
   2859    #define VMA_NULL   nullptr
   2860 #endif
   2861 
   2862 #ifndef VMA_FALLTHROUGH
   2863     #if __cplusplus >= 201703L || _MSVC_LANG >= 201703L // C++17
   2864         #define VMA_FALLTHROUGH [[fallthrough]]
   2865     #else
   2866         #define VMA_FALLTHROUGH
   2867     #endif
   2868 #endif
   2869 
   2870 // Normal assert to check for programmer's errors, especially in Debug configuration.
   2871 #ifndef VMA_ASSERT
   2872    #ifdef NDEBUG
   2873        #define VMA_ASSERT(expr)
   2874    #else
   2875        #define VMA_ASSERT(expr)         assert(expr)
   2876    #endif
   2877 #endif
   2878 
   2879 // Assert that will be called very often, like inside data structures e.g. operator[].
   2880 // Making it non-empty can make program slow.
   2881 #ifndef VMA_HEAVY_ASSERT
   2882    #ifdef NDEBUG
   2883        #define VMA_HEAVY_ASSERT(expr)
   2884    #else
   2885        #define VMA_HEAVY_ASSERT(expr)   //VMA_ASSERT(expr)
   2886    #endif
   2887 #endif
   2888 
   2889 // Assert used for reporting memory leaks - unfreed allocations.
   2890 #ifndef VMA_ASSERT_LEAK
   2891     #define VMA_ASSERT_LEAK(expr)   VMA_ASSERT(expr)
   2892 #endif
   2893 
   2894 // If your compiler is not compatible with C++17 and the definition of the
   2895 // aligned_alloc() function is missing, uncommenting the following line may help:
   2896 
   2897 //#include <malloc.h>
   2898 
   2899 #if defined(__ANDROID_API__) && (__ANDROID_API__ < 16)
   2900 #include <cstdlib>
   2901 static void* vma_aligned_alloc(size_t alignment, size_t size)
   2902 {
   2903     // alignment must be >= sizeof(void*)
   2904     if(alignment < sizeof(void*))
   2905     {
   2906         alignment = sizeof(void*);
   2907     }
   2908 
   2909     return memalign(alignment, size);
   2910 }
   2911 #elif defined(__APPLE__) || defined(__ANDROID__) || (defined(__linux__) && defined(__GLIBCXX__) && !defined(_GLIBCXX_HAVE_ALIGNED_ALLOC))
   2912 #include <cstdlib>
   2913 
   2914 #if defined(__APPLE__)
   2915 #include <AvailabilityMacros.h>
   2916 #endif
   2917 
   2918 static void* vma_aligned_alloc(size_t alignment, size_t size)
   2919 {
   2920     // Unfortunately, aligned_alloc causes VMA to crash because it returns null pointers (at least under macOS 11.4).
   2921     // Therefore, for now, disable this specific exception until a proper solution is found.
   2922     //#if defined(__APPLE__) && (defined(MAC_OS_X_VERSION_10_16) || defined(__IPHONE_14_0))
   2923     //#if MAC_OS_X_VERSION_MAX_ALLOWED >= MAC_OS_X_VERSION_10_16 || __IPHONE_OS_VERSION_MAX_ALLOWED >= __IPHONE_14_0
   2924     //    // For C++14, usr/include/malloc/_malloc.h declares aligned_alloc()) only
   2925     //    // with the MacOSX11.0 SDK in Xcode 12 (which is what adds
   2926     //    // MAC_OS_X_VERSION_10_16), even though the function is marked
   2927     //    // available for 10.15. That is why the preprocessor checks for 10.16 but
   2928     //    // the __builtin_available checks for 10.15.
   2929     //    // People who use C++17 could call aligned_alloc with the 10.15 SDK already.
   2930     //    if (__builtin_available(macOS 10.15, iOS 13, *))
   2931     //        return aligned_alloc(alignment, size);
   2932     //#endif
   2933     //#endif
   2934 
   2935     // alignment must be >= sizeof(void*)
   2936     if(alignment < sizeof(void*))
   2937     {
   2938         alignment = sizeof(void*);
   2939     }
   2940 
   2941     void *pointer;
   2942     if(posix_memalign(&pointer, alignment, size) == 0)
   2943         return pointer;
   2944     return VMA_NULL;
   2945 }
   2946 #elif defined(_WIN32)
   2947 static void* vma_aligned_alloc(size_t alignment, size_t size)
   2948 {
   2949     return _aligned_malloc(size, alignment);
   2950 }
   2951 #elif __cplusplus >= 201703L || _MSVC_LANG >= 201703L // C++17
   2952 static void* vma_aligned_alloc(size_t alignment, size_t size)
   2953 {
   2954     return aligned_alloc(alignment, size);
   2955 }
   2956 #else
   2957 static void* vma_aligned_alloc(size_t alignment, size_t size)
   2958 {
   2959     VMA_ASSERT(0 && "Could not implement aligned_alloc automatically. Please enable C++17 or later in your compiler or provide custom implementation of macro VMA_SYSTEM_ALIGNED_MALLOC (and VMA_SYSTEM_ALIGNED_FREE if needed) using the API of your system.");
   2960     return VMA_NULL;
   2961 }
   2962 #endif
   2963 
   2964 #if defined(_WIN32)
   2965 static void vma_aligned_free(void* ptr)
   2966 {
   2967     _aligned_free(ptr);
   2968 }
   2969 #else
   2970 static void vma_aligned_free(void* VMA_NULLABLE ptr)
   2971 {
   2972     free(ptr);
   2973 }
   2974 #endif
   2975 
   2976 #ifndef VMA_ALIGN_OF
   2977    #define VMA_ALIGN_OF(type)       (alignof(type))
   2978 #endif
   2979 
   2980 #ifndef VMA_SYSTEM_ALIGNED_MALLOC
   2981    #define VMA_SYSTEM_ALIGNED_MALLOC(size, alignment) vma_aligned_alloc((alignment), (size))
   2982 #endif
   2983 
   2984 #ifndef VMA_SYSTEM_ALIGNED_FREE
   2985    // VMA_SYSTEM_FREE is the old name, but might have been defined by the user
   2986    #if defined(VMA_SYSTEM_FREE)
   2987       #define VMA_SYSTEM_ALIGNED_FREE(ptr)     VMA_SYSTEM_FREE(ptr)
   2988    #else
   2989       #define VMA_SYSTEM_ALIGNED_FREE(ptr)     vma_aligned_free(ptr)
   2990     #endif
   2991 #endif
   2992 
   2993 #ifndef VMA_COUNT_BITS_SET
   2994     // Returns number of bits set to 1 in (v)
   2995     #define VMA_COUNT_BITS_SET(v) VmaCountBitsSet(v)
   2996 #endif
   2997 
   2998 #ifndef VMA_BITSCAN_LSB
   2999     // Scans the integer for the index of the first nonzero bit from the Least Significant Bit (LSB). If mask is 0, returns UINT8_MAX
   3000     #define VMA_BITSCAN_LSB(mask) VmaBitScanLSB(mask)
   3001 #endif
   3002 
   3003 #ifndef VMA_BITSCAN_MSB
   3004     // Scans the integer for the index of the first nonzero bit from the Most Significant Bit (MSB). If mask is 0, returns UINT8_MAX
   3005     #define VMA_BITSCAN_MSB(mask) VmaBitScanMSB(mask)
   3006 #endif
   3007 
   3008 #ifndef VMA_MIN
   3009    #define VMA_MIN(v1, v2)    ((std::min)((v1), (v2)))
   3010 #endif
   3011 
   3012 #ifndef VMA_MAX
   3013    #define VMA_MAX(v1, v2)    ((std::max)((v1), (v2)))
   3014 #endif
   3015 
   3016 #ifndef VMA_SORT
   3017    #define VMA_SORT(beg, end, cmp)  std::sort(beg, end, cmp)
   3018 #endif
   3019 
   3020 #ifndef VMA_DEBUG_LOG_FORMAT
   3021    #define VMA_DEBUG_LOG_FORMAT(format, ...)
   3022    /*
   3023    #define VMA_DEBUG_LOG_FORMAT(format, ...) do { \
   3024        printf((format), __VA_ARGS__); \
   3025        printf("\n"); \
   3026    } while(false)
   3027    */
   3028 #endif
   3029 
   3030 #ifndef VMA_DEBUG_LOG
   3031     #define VMA_DEBUG_LOG(str)   VMA_DEBUG_LOG_FORMAT("%s", (str))
   3032 #endif
   3033 
   3034 #ifndef VMA_LEAK_LOG_FORMAT
   3035     #define VMA_LEAK_LOG_FORMAT(format, ...)   VMA_DEBUG_LOG_FORMAT(format, __VA_ARGS__)
   3036 #endif
   3037 
   3038 #ifndef VMA_CLASS_NO_COPY
   3039     #define VMA_CLASS_NO_COPY(className) \
   3040         private: \
   3041             className(const className&) = delete; \
   3042             className& operator=(const className&) = delete;
   3043 #endif
   3044 #ifndef VMA_CLASS_NO_COPY_NO_MOVE
   3045     #define VMA_CLASS_NO_COPY_NO_MOVE(className) \
   3046         private: \
   3047             className(const className&) = delete; \
   3048             className(className&&) = delete; \
   3049             className& operator=(const className&) = delete; \
   3050             className& operator=(className&&) = delete;
   3051 #endif
   3052 
   3053 // Define this macro to 1 to enable functions: vmaBuildStatsString, vmaFreeStatsString.
   3054 #if VMA_STATS_STRING_ENABLED
   3055     static inline void VmaUint32ToStr(char* VMA_NOT_NULL outStr, size_t strLen, uint32_t num)
   3056     {
   3057         snprintf(outStr, strLen, "%" PRIu32, num);
   3058     }
   3059     static inline void VmaUint64ToStr(char* VMA_NOT_NULL outStr, size_t strLen, uint64_t num)
   3060     {
   3061         snprintf(outStr, strLen, "%" PRIu64, num);
   3062     }
   3063     static inline void VmaPtrToStr(char* VMA_NOT_NULL outStr, size_t strLen, const void* ptr)
   3064     {
   3065         snprintf(outStr, strLen, "%p", ptr);
   3066     }
   3067 #endif
   3068 
   3069 #ifndef VMA_MUTEX
   3070     class VmaMutex
   3071     {
   3072     VMA_CLASS_NO_COPY_NO_MOVE(VmaMutex)
   3073     public:
   3074         VmaMutex() { }
   3075         void Lock() { m_Mutex.lock(); }
   3076         void Unlock() { m_Mutex.unlock(); }
   3077         bool TryLock() { return m_Mutex.try_lock(); }
   3078     private:
   3079         std::mutex m_Mutex;
   3080     };
   3081     #define VMA_MUTEX VmaMutex
   3082 #endif
   3083 
   3084 // Read-write mutex, where "read" is shared access, "write" is exclusive access.
   3085 #ifndef VMA_RW_MUTEX
   3086     #if VMA_USE_STL_SHARED_MUTEX
   3087         // Use std::shared_mutex from C++17.
   3088         #include <shared_mutex>
   3089         class VmaRWMutex
   3090         {
   3091         public:
   3092             void LockRead() { m_Mutex.lock_shared(); }
   3093             void UnlockRead() { m_Mutex.unlock_shared(); }
   3094             bool TryLockRead() { return m_Mutex.try_lock_shared(); }
   3095             void LockWrite() { m_Mutex.lock(); }
   3096             void UnlockWrite() { m_Mutex.unlock(); }
   3097             bool TryLockWrite() { return m_Mutex.try_lock(); }
   3098         private:
   3099             std::shared_mutex m_Mutex;
   3100         };
   3101         #define VMA_RW_MUTEX VmaRWMutex
   3102     #elif defined(_WIN32) && defined(WINVER) && WINVER >= 0x0600
   3103         // Use SRWLOCK from WinAPI.
   3104         // Minimum supported client = Windows Vista, server = Windows Server 2008.
   3105         class VmaRWMutex
   3106         {
   3107         public:
   3108             VmaRWMutex() { InitializeSRWLock(&m_Lock); }
   3109             void LockRead() { AcquireSRWLockShared(&m_Lock); }
   3110             void UnlockRead() { ReleaseSRWLockShared(&m_Lock); }
   3111             bool TryLockRead() { return TryAcquireSRWLockShared(&m_Lock) != FALSE; }
   3112             void LockWrite() { AcquireSRWLockExclusive(&m_Lock); }
   3113             void UnlockWrite() { ReleaseSRWLockExclusive(&m_Lock); }
   3114             bool TryLockWrite() { return TryAcquireSRWLockExclusive(&m_Lock) != FALSE; }
   3115         private:
   3116             SRWLOCK m_Lock;
   3117         };
   3118         #define VMA_RW_MUTEX VmaRWMutex
   3119     #else
   3120         // Less efficient fallback: Use normal mutex.
   3121         class VmaRWMutex
   3122         {
   3123         public:
   3124             void LockRead() { m_Mutex.Lock(); }
   3125             void UnlockRead() { m_Mutex.Unlock(); }
   3126             bool TryLockRead() { return m_Mutex.TryLock(); }
   3127             void LockWrite() { m_Mutex.Lock(); }
   3128             void UnlockWrite() { m_Mutex.Unlock(); }
   3129             bool TryLockWrite() { return m_Mutex.TryLock(); }
   3130         private:
   3131             VMA_MUTEX m_Mutex;
   3132         };
   3133         #define VMA_RW_MUTEX VmaRWMutex
   3134     #endif // #if VMA_USE_STL_SHARED_MUTEX
   3135 #endif // #ifndef VMA_RW_MUTEX
   3136 
   3137 /*
   3138 If providing your own implementation, you need to implement a subset of std::atomic.
   3139 */
   3140 #ifndef VMA_ATOMIC_UINT32
   3141     #include <atomic>
   3142     #define VMA_ATOMIC_UINT32 std::atomic<uint32_t>
   3143 #endif
   3144 
   3145 #ifndef VMA_ATOMIC_UINT64
   3146     #include <atomic>
   3147     #define VMA_ATOMIC_UINT64 std::atomic<uint64_t>
   3148 #endif
   3149 
   3150 #ifndef VMA_DEBUG_ALWAYS_DEDICATED_MEMORY
   3151     /**
   3152     Every allocation will have its own memory block.
   3153     Define to 1 for debugging purposes only.
   3154     */
   3155     #define VMA_DEBUG_ALWAYS_DEDICATED_MEMORY (0)
   3156 #endif
   3157 
   3158 #ifndef VMA_MIN_ALIGNMENT
   3159     /**
   3160     Minimum alignment of all allocations, in bytes.
   3161     Set to more than 1 for debugging purposes. Must be power of two.
   3162     */
   3163     #ifdef VMA_DEBUG_ALIGNMENT // Old name
   3164         #define VMA_MIN_ALIGNMENT VMA_DEBUG_ALIGNMENT
   3165     #else
   3166         #define VMA_MIN_ALIGNMENT (1)
   3167     #endif
   3168 #endif
   3169 
   3170 #ifndef VMA_DEBUG_MARGIN
   3171     /**
   3172     Minimum margin after every allocation, in bytes.
   3173     Set nonzero for debugging purposes only.
   3174     */
   3175     #define VMA_DEBUG_MARGIN (0)
   3176 #endif
   3177 
   3178 #ifndef VMA_DEBUG_INITIALIZE_ALLOCATIONS
   3179     /**
   3180     Define this macro to 1 to automatically fill new allocations and destroyed
   3181     allocations with some bit pattern.
   3182     */
   3183     #define VMA_DEBUG_INITIALIZE_ALLOCATIONS (0)
   3184 #endif
   3185 
   3186 #ifndef VMA_DEBUG_DETECT_CORRUPTION
   3187     /**
   3188     Define this macro to 1, together with a non-zero value of VMA_DEBUG_MARGIN, to
   3189     enable writing a magic value to the margin after every allocation and
   3190     validating it, so that memory corruptions (out-of-bounds writes) are detected.
   3191     */
   3192     #define VMA_DEBUG_DETECT_CORRUPTION (0)
   3193 #endif
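
        /*
        Example (a sketch): to enable corruption detection, define these macros in the
        compilation unit that defines VMA_IMPLEMENTATION, before including this header:

            #define VMA_DEBUG_MARGIN 16
            #define VMA_DEBUG_DETECT_CORRUPTION 1

        Allocations in HOST_VISIBLE + HOST_COHERENT memory then get a magic value written
        into their margins, which vmaCheckCorruption() can validate.
        */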
   3194 
   3195 #ifndef VMA_DEBUG_GLOBAL_MUTEX
   3196     /**
   3197     Set this to 1 for debugging purposes only, to enable single mutex protecting all
   3198     entry calls to the library. Can be useful for debugging multithreading issues.
   3199     */
   3200     #define VMA_DEBUG_GLOBAL_MUTEX (0)
   3201 #endif
   3202 
   3203 #ifndef VMA_DEBUG_MIN_BUFFER_IMAGE_GRANULARITY
   3204     /**
   3205     Minimum value for VkPhysicalDeviceLimits::bufferImageGranularity.
   3206     Set to more than 1 for debugging purposes only. Must be power of two.
   3207     */
   3208     #define VMA_DEBUG_MIN_BUFFER_IMAGE_GRANULARITY (1)
   3209 #endif
   3210 
   3211 #ifndef VMA_DEBUG_DONT_EXCEED_MAX_MEMORY_ALLOCATION_COUNT
   3212     /*
   3213     Set this to 1 to make VMA never exceed VkPhysicalDeviceLimits::maxMemoryAllocationCount
   3214     and return an error instead of leaving it up to the Vulkan implementation what to do in such cases.
   3215     */
   3216     #define VMA_DEBUG_DONT_EXCEED_MAX_MEMORY_ALLOCATION_COUNT (0)
   3217 #endif
   3218 
   3219 #ifndef VMA_SMALL_HEAP_MAX_SIZE
   3220    /// Maximum size of a memory heap in Vulkan to consider it "small".
   3221    #define VMA_SMALL_HEAP_MAX_SIZE (1024ull * 1024 * 1024)
   3222 #endif
   3223 
   3224 #ifndef VMA_DEFAULT_LARGE_HEAP_BLOCK_SIZE
   3225    /// Default size of a block allocated as single VkDeviceMemory from a "large" heap.
   3226    #define VMA_DEFAULT_LARGE_HEAP_BLOCK_SIZE (256ull * 1024 * 1024)
   3227 #endif
   3228 
   3229 /*
   3230 Mapping hysteresis is logic that kicks in when vmaMapMemory/vmaUnmapMemory is called
   3231 or a persistently mapped allocation is created and destroyed several times in a row.
   3232 It keeps an additional +1 mapping of a device memory block to avoid calling the actual
   3233 vkMapMemory/vkUnmapMemory too many times, which may improve performance and help
   3234 tools like RenderDoc.
   3235 */
   3236 #ifndef VMA_MAPPING_HYSTERESIS_ENABLED
   3237     #define VMA_MAPPING_HYSTERESIS_ENABLED 1
   3238 #endif
   3239 
   3240 #define VMA_VALIDATE(cond) do { if(!(cond)) { \
   3241         VMA_ASSERT(0 && "Validation failed: " #cond); \
   3242         return false; \
   3243     } } while(false)
   3244 
   3245 /*******************************************************************************
   3246 END OF CONFIGURATION
   3247 */
   3248 #endif // _VMA_CONFIGURATION
   3249 
   3250 
   3251 static const uint8_t VMA_ALLOCATION_FILL_PATTERN_CREATED = 0xDC;
   3252 static const uint8_t VMA_ALLOCATION_FILL_PATTERN_DESTROYED = 0xEF;
   3253 // Decimal 2139416166, float NaN, little-endian binary 66 E6 84 7F.
   3254 static const uint32_t VMA_CORRUPTION_DETECTION_MAGIC_VALUE = 0x7F84E666;
   3255 
   3256 // Copy of some Vulkan definitions so we don't need to check their existence just to handle a few constants.
   3257 static const uint32_t VK_MEMORY_PROPERTY_DEVICE_COHERENT_BIT_AMD_COPY = 0x00000040;
   3258 static const uint32_t VK_MEMORY_PROPERTY_DEVICE_UNCACHED_BIT_AMD_COPY = 0x00000080;
   3259 static const uint32_t VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT_COPY = 0x00020000;
   3260 static const uint32_t VK_IMAGE_CREATE_DISJOINT_BIT_COPY = 0x00000200;
   3261 static const int32_t VK_IMAGE_TILING_DRM_FORMAT_MODIFIER_EXT_COPY = 1000158000;
   3262 static const uint32_t VMA_ALLOCATION_INTERNAL_STRATEGY_MIN_OFFSET = 0x10000000u;
   3263 static const uint32_t VMA_ALLOCATION_TRY_COUNT = 32;
   3264 static const uint32_t VMA_VENDOR_ID_AMD = 4098;
   3265 
   3266 // This one is tricky. Vulkan specification defines this code as available since
   3267 // Vulkan 1.0, but doesn't actually define it in Vulkan SDK earlier than 1.2.131.
   3268 // See pull request #207.
   3269 #define VK_ERROR_UNKNOWN_COPY ((VkResult)-13)
   3270 
   3271 
   3272 #if VMA_STATS_STRING_ENABLED
   3273 // Correspond to values of enum VmaSuballocationType.
   3274 static const char* VMA_SUBALLOCATION_TYPE_NAMES[] =
   3275 {
   3276     "FREE",
   3277     "UNKNOWN",
   3278     "BUFFER",
   3279     "IMAGE_UNKNOWN",
   3280     "IMAGE_LINEAR",
   3281     "IMAGE_OPTIMAL",
   3282 };
   3283 #endif
   3284 
   3285 static VkAllocationCallbacks VmaEmptyAllocationCallbacks =
   3286     { VMA_NULL, VMA_NULL, VMA_NULL, VMA_NULL, VMA_NULL, VMA_NULL };
   3287 
   3288 
   3289 #ifndef _VMA_ENUM_DECLARATIONS
   3290 
   3291 enum VmaSuballocationType
   3292 {
   3293     VMA_SUBALLOCATION_TYPE_FREE = 0,
   3294     VMA_SUBALLOCATION_TYPE_UNKNOWN = 1,
   3295     VMA_SUBALLOCATION_TYPE_BUFFER = 2,
   3296     VMA_SUBALLOCATION_TYPE_IMAGE_UNKNOWN = 3,
   3297     VMA_SUBALLOCATION_TYPE_IMAGE_LINEAR = 4,
   3298     VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL = 5,
   3299     VMA_SUBALLOCATION_TYPE_MAX_ENUM = 0x7FFFFFFF
   3300 };
   3301 
   3302 enum VMA_CACHE_OPERATION
   3303 {
   3304     VMA_CACHE_FLUSH,
   3305     VMA_CACHE_INVALIDATE
   3306 };
   3307 
   3308 enum class VmaAllocationRequestType
   3309 {
   3310     Normal,
   3311     TLSF,
   3312     // Used by "Linear" algorithm.
   3313     UpperAddress,
   3314     EndOf1st,
   3315     EndOf2nd,
   3316 };
   3317 
   3318 #endif // _VMA_ENUM_DECLARATIONS
   3319 
   3320 #ifndef _VMA_FORWARD_DECLARATIONS
   3321 // Opaque handle used by allocation algorithms to identify a single allocation in any conforming way.
   3322 VK_DEFINE_NON_DISPATCHABLE_HANDLE(VmaAllocHandle);
   3323 
   3324 struct VmaMutexLock;
   3325 struct VmaMutexLockRead;
   3326 struct VmaMutexLockWrite;
   3327 
   3328 template<typename T>
   3329 struct AtomicTransactionalIncrement;
   3330 
   3331 template<typename T>
   3332 struct VmaStlAllocator;
   3333 
   3334 template<typename T, typename AllocatorT>
   3335 class VmaVector;
   3336 
   3337 template<typename T, typename AllocatorT, size_t N>
   3338 class VmaSmallVector;
   3339 
   3340 template<typename T>
   3341 class VmaPoolAllocator;
   3342 
   3343 template<typename T>
   3344 struct VmaListItem;
   3345 
   3346 template<typename T>
   3347 class VmaRawList;
   3348 
   3349 template<typename T, typename AllocatorT>
   3350 class VmaList;
   3351 
   3352 template<typename ItemTypeTraits>
   3353 class VmaIntrusiveLinkedList;
   3354 
   3355 #if VMA_STATS_STRING_ENABLED
   3356 class VmaStringBuilder;
   3357 class VmaJsonWriter;
   3358 #endif
   3359 
   3360 class VmaDeviceMemoryBlock;
   3361 
   3362 struct VmaDedicatedAllocationListItemTraits;
   3363 class VmaDedicatedAllocationList;
   3364 
   3365 struct VmaSuballocation;
   3366 struct VmaSuballocationOffsetLess;
   3367 struct VmaSuballocationOffsetGreater;
   3368 struct VmaSuballocationItemSizeLess;
   3369 
   3370 typedef VmaList<VmaSuballocation, VmaStlAllocator<VmaSuballocation>> VmaSuballocationList;
   3371 
   3372 struct VmaAllocationRequest;
   3373 
   3374 class VmaBlockMetadata;
   3375 class VmaBlockMetadata_Linear;
   3376 class VmaBlockMetadata_TLSF;
   3377 
   3378 class VmaBlockVector;
   3379 
   3380 struct VmaPoolListItemTraits;
   3381 
   3382 struct VmaCurrentBudgetData;
   3383 
   3384 class VmaAllocationObjectAllocator;
   3385 
   3386 #endif // _VMA_FORWARD_DECLARATIONS
   3387 
   3388 
   3389 #ifndef _VMA_FUNCTIONS
   3390 
   3391 /*
    3392 Returns the number of bits set to 1 in (v).
   3393 
   3394 On specific platforms and compilers you can use intrinsics like:
   3395 
   3396 Visual Studio:
   3397     return __popcnt(v);
   3398 GCC, Clang:
   3399     return static_cast<uint32_t>(__builtin_popcount(v));
   3400 
    3401 Define the macro VMA_COUNT_BITS_SET to provide your own optimized implementation.
    3402 But then you need to check at runtime whether the user's CPU supports these intrinsics, as some old processors don't.
   3403 */
   3404 static inline uint32_t VmaCountBitsSet(uint32_t v)
   3405 {
   3406 #if VMA_CPP20
   3407     return std::popcount(v);
   3408 #else
   3409     uint32_t c = v - ((v >> 1) & 0x55555555);
   3410     c = ((c >> 2) & 0x33333333) + (c & 0x33333333);
   3411     c = ((c >> 4) + c) & 0x0F0F0F0F;
   3412     c = ((c >> 8) + c) & 0x00FF00FF;
   3413     c = ((c >> 16) + c) & 0x0000FFFF;
   3414     return c;
   3415 #endif
   3416 }
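// Illustrative sketch, not part of the original header: a minimal sanity check
// showing what the SWAR fallback above computes. The Example* function below is
// hypothetical and disabled so it never compiles in:
#if 0
#include <cassert>
void ExampleCountBits()
{
    assert(VmaCountBitsSet(0u) == 0);
    assert(VmaCountBitsSet(0xF0F0u) == 8);      // two groups of 4 set bits
    assert(VmaCountBitsSet(0xFFFFFFFFu) == 32); // all bits set
}
#endif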
   3417 
   3418 static inline uint8_t VmaBitScanLSB(uint64_t mask)
   3419 {
   3420 #if defined(_MSC_VER) && defined(_WIN64)
   3421     unsigned long pos;
   3422     if (_BitScanForward64(&pos, mask))
   3423         return static_cast<uint8_t>(pos);
   3424     return UINT8_MAX;
   3425 #elif VMA_CPP20
   3426     if(mask)
   3427         return static_cast<uint8_t>(std::countr_zero(mask));
   3428     return UINT8_MAX;
   3429 #elif defined __GNUC__ || defined __clang__
   3430     return static_cast<uint8_t>(__builtin_ffsll(mask)) - 1U;
   3431 #else
   3432     uint8_t pos = 0;
   3433     uint64_t bit = 1;
   3434     do
   3435     {
   3436         if (mask & bit)
   3437             return pos;
   3438         bit <<= 1;
   3439     } while (pos++ < 63);
   3440     return UINT8_MAX;
   3441 #endif
   3442 }
   3443 
   3444 static inline uint8_t VmaBitScanLSB(uint32_t mask)
   3445 {
   3446 #ifdef _MSC_VER
   3447     unsigned long pos;
   3448     if (_BitScanForward(&pos, mask))
   3449         return static_cast<uint8_t>(pos);
   3450     return UINT8_MAX;
   3451 #elif VMA_CPP20
   3452     if(mask)
   3453         return static_cast<uint8_t>(std::countr_zero(mask));
   3454     return UINT8_MAX;
   3455 #elif defined __GNUC__ || defined __clang__
   3456     return static_cast<uint8_t>(__builtin_ffs(mask)) - 1U;
   3457 #else
   3458     uint8_t pos = 0;
   3459     uint32_t bit = 1;
   3460     do
   3461     {
   3462         if (mask & bit)
   3463             return pos;
   3464         bit <<= 1;
   3465     } while (pos++ < 31);
   3466     return UINT8_MAX;
   3467 #endif
   3468 }
   3469 
   3470 static inline uint8_t VmaBitScanMSB(uint64_t mask)
   3471 {
   3472 #if defined(_MSC_VER) && defined(_WIN64)
   3473     unsigned long pos;
   3474     if (_BitScanReverse64(&pos, mask))
   3475         return static_cast<uint8_t>(pos);
   3476 #elif VMA_CPP20
   3477     if(mask)
   3478         return 63 - static_cast<uint8_t>(std::countl_zero(mask));
   3479 #elif defined __GNUC__ || defined __clang__
   3480     if (mask)
   3481         return 63 - static_cast<uint8_t>(__builtin_clzll(mask));
   3482 #else
   3483     uint8_t pos = 63;
   3484     uint64_t bit = 1ULL << 63;
   3485     do
   3486     {
   3487         if (mask & bit)
   3488             return pos;
   3489         bit >>= 1;
   3490     } while (pos-- > 0);
   3491 #endif
   3492     return UINT8_MAX;
   3493 }
   3494 
   3495 static inline uint8_t VmaBitScanMSB(uint32_t mask)
   3496 {
   3497 #ifdef _MSC_VER
   3498     unsigned long pos;
   3499     if (_BitScanReverse(&pos, mask))
   3500         return static_cast<uint8_t>(pos);
   3501 #elif VMA_CPP20
   3502     if(mask)
   3503         return 31 - static_cast<uint8_t>(std::countl_zero(mask));
   3504 #elif defined __GNUC__ || defined __clang__
   3505     if (mask)
   3506         return 31 - static_cast<uint8_t>(__builtin_clz(mask));
   3507 #else
   3508     uint8_t pos = 31;
   3509     uint32_t bit = 1UL << 31;
   3510     do
   3511     {
   3512         if (mask & bit)
   3513             return pos;
   3514         bit >>= 1;
   3515     } while (pos-- > 0);
   3516 #endif
   3517     return UINT8_MAX;
   3518 }
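// Illustrative sketch, not part of the original header: both scans return a bit
// index, with UINT8_MAX as the "no bit set" sentinel; for mask 0b10100 the lowest
// set bit is 2 and the highest is 4. Hypothetical, disabled example:
#if 0
#include <cassert>
void ExampleBitScan()
{
    assert(VmaBitScanLSB(uint32_t(0b10100)) == 2);
    assert(VmaBitScanMSB(uint32_t(0b10100)) == 4);
    assert(VmaBitScanLSB(uint32_t(0)) == UINT8_MAX); // empty mask
}
#endif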
   3519 
   3520 /*
    3521 Returns true if the given number is a power of two.
    3522 T must be an unsigned integer, or a signed integer with a nonnegative value.
    3523 Returns true for 0 as well.
   3524 */
   3525 template <typename T>
   3526 inline bool VmaIsPow2(T x)
   3527 {
   3528     return (x & (x - 1)) == 0;
   3529 }
   3530 
    3531 // Aligns given value up to the nearest multiple of align value. For example: VmaAlignUp(11, 8) = 16.
   3532 // Use types like uint32_t, uint64_t as T.
   3533 template <typename T>
   3534 static inline T VmaAlignUp(T val, T alignment)
   3535 {
   3536     VMA_HEAVY_ASSERT(VmaIsPow2(alignment));
   3537     return (val + alignment - 1) & ~(alignment - 1);
   3538 }
   3539 
    3540 // Aligns given value down to the nearest multiple of align value. For example: VmaAlignDown(11, 8) = 8.
   3541 // Use types like uint32_t, uint64_t as T.
   3542 template <typename T>
   3543 static inline T VmaAlignDown(T val, T alignment)
   3544 {
   3545     VMA_HEAVY_ASSERT(VmaIsPow2(alignment));
   3546     return val & ~(alignment - 1);
   3547 }
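// Illustrative sketch, not part of the original header: with a power-of-two
// alignment, values round to the next or previous multiple. Hypothetical,
// disabled example:
#if 0
#include <cassert>
void ExampleAlign()
{
    assert(VmaAlignUp<uint32_t>(11, 8) == 16);
    assert(VmaAlignUp<uint32_t>(16, 8) == 16); // already aligned value is unchanged
    assert(VmaAlignDown<uint32_t>(11, 8) == 8);
}
#endif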
   3548 
    3549 // Division with mathematical rounding to the nearest integer.
   3550 template <typename T>
   3551 static inline T VmaRoundDiv(T x, T y)
   3552 {
   3553     return (x + (y / (T)2)) / y;
   3554 }
   3555 
    3556 // Divide by 'y' and round up to the nearest integer.
   3557 template <typename T>
   3558 static inline T VmaDivideRoundingUp(T x, T y)
   3559 {
   3560     return (x + y - (T)1) / y;
   3561 }
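// Illustrative sketch, not part of the original header: VmaRoundDiv rounds to the
// nearest integer, VmaDivideRoundingUp always rounds up. Hypothetical, disabled example:
#if 0
#include <cassert>
void ExampleDivision()
{
    assert(VmaRoundDiv<uint32_t>(7, 3) == 2);         // 7/3 = 2.33 -> 2
    assert(VmaRoundDiv<uint32_t>(8, 3) == 3);         // 8/3 = 2.67 -> 3
    assert(VmaDivideRoundingUp<uint32_t>(7, 3) == 3); // ceil(7/3) = 3
}
#endif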
   3562 
    3563 // Returns the smallest power of 2 greater than or equal to v.
   3564 static inline uint32_t VmaNextPow2(uint32_t v)
   3565 {
   3566     v--;
   3567     v |= v >> 1;
   3568     v |= v >> 2;
   3569     v |= v >> 4;
   3570     v |= v >> 8;
   3571     v |= v >> 16;
   3572     v++;
   3573     return v;
   3574 }
   3575 
   3576 static inline uint64_t VmaNextPow2(uint64_t v)
   3577 {
   3578     v--;
   3579     v |= v >> 1;
   3580     v |= v >> 2;
   3581     v |= v >> 4;
   3582     v |= v >> 8;
   3583     v |= v >> 16;
   3584     v |= v >> 32;
   3585     v++;
   3586     return v;
   3587 }
   3588 
    3589 // Returns the largest power of 2 less than or equal to v.
   3590 static inline uint32_t VmaPrevPow2(uint32_t v)
   3591 {
   3592     v |= v >> 1;
   3593     v |= v >> 2;
   3594     v |= v >> 4;
   3595     v |= v >> 8;
   3596     v |= v >> 16;
   3597     v = v ^ (v >> 1);
   3598     return v;
   3599 }
   3600 
   3601 static inline uint64_t VmaPrevPow2(uint64_t v)
   3602 {
   3603     v |= v >> 1;
   3604     v |= v >> 2;
   3605     v |= v >> 4;
   3606     v |= v >> 8;
   3607     v |= v >> 16;
   3608     v |= v >> 32;
   3609     v = v ^ (v >> 1);
   3610     return v;
   3611 }
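// Illustrative sketch, not part of the original header: the two helpers bracket a
// value between its neighboring powers of two, and exact powers of two map to
// themselves. Hypothetical, disabled example:
#if 0
#include <cassert>
void ExamplePow2()
{
    assert(VmaNextPow2(uint32_t(17)) == 32);
    assert(VmaPrevPow2(uint32_t(17)) == 16);
    assert(VmaNextPow2(uint32_t(16)) == 16); // exact power of two is unchanged
}
#endif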
   3612 
   3613 static inline bool VmaStrIsEmpty(const char* pStr)
   3614 {
   3615     return pStr == VMA_NULL || *pStr == '\0';
   3616 }
   3617 
   3618 /*
   3619 Returns true if two memory blocks occupy overlapping pages.
    3620 ResourceA must be at a lower memory offset than ResourceB.
   3621 
   3622 Algorithm is based on "Vulkan 1.0.39 - A Specification (with all registered Vulkan extensions)"
   3623 chapter 11.6 "Resource Memory Association", paragraph "Buffer-Image Granularity".
   3624 */
   3625 static inline bool VmaBlocksOnSamePage(
   3626     VkDeviceSize resourceAOffset,
   3627     VkDeviceSize resourceASize,
   3628     VkDeviceSize resourceBOffset,
   3629     VkDeviceSize pageSize)
   3630 {
   3631     VMA_ASSERT(resourceAOffset + resourceASize <= resourceBOffset && resourceASize > 0 && pageSize > 0);
   3632     VkDeviceSize resourceAEnd = resourceAOffset + resourceASize - 1;
   3633     VkDeviceSize resourceAEndPage = resourceAEnd & ~(pageSize - 1);
   3634     VkDeviceSize resourceBStart = resourceBOffset;
   3635     VkDeviceSize resourceBStartPage = resourceBStart & ~(pageSize - 1);
   3636     return resourceAEndPage == resourceBStartPage;
   3637 }
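// Illustrative sketch, not part of the original header: with pageSize 4096, two
// resources that both touch page 0 "overlap", while one starting exactly at offset
// 4096 lands on the next page. Hypothetical, disabled example:
#if 0
#include <cassert>
void ExampleSamePage()
{
    assert(VmaBlocksOnSamePage(0, 100, 4000, 4096) == true);  // both on page 0
    assert(VmaBlocksOnSamePage(0, 100, 4096, 4096) == false); // B starts on page 1
}
#endif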
   3638 
   3639 /*
    3640 Returns true if the given suballocation types could conflict and must respect
    3641 VkPhysicalDeviceLimits::bufferImageGranularity. They conflict if one is a buffer
    3642 or linear image and the other is an optimal image. If a type is unknown, behave
    3643 conservatively.
   3644 */
   3645 static inline bool VmaIsBufferImageGranularityConflict(
   3646     VmaSuballocationType suballocType1,
   3647     VmaSuballocationType suballocType2)
   3648 {
   3649     if (suballocType1 > suballocType2)
   3650     {
   3651         std::swap(suballocType1, suballocType2);
   3652     }
   3653 
   3654     switch (suballocType1)
   3655     {
   3656     case VMA_SUBALLOCATION_TYPE_FREE:
   3657         return false;
   3658     case VMA_SUBALLOCATION_TYPE_UNKNOWN:
   3659         return true;
   3660     case VMA_SUBALLOCATION_TYPE_BUFFER:
   3661         return
   3662             suballocType2 == VMA_SUBALLOCATION_TYPE_IMAGE_UNKNOWN ||
   3663             suballocType2 == VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL;
   3664     case VMA_SUBALLOCATION_TYPE_IMAGE_UNKNOWN:
   3665         return
   3666             suballocType2 == VMA_SUBALLOCATION_TYPE_IMAGE_UNKNOWN ||
   3667             suballocType2 == VMA_SUBALLOCATION_TYPE_IMAGE_LINEAR ||
   3668             suballocType2 == VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL;
   3669     case VMA_SUBALLOCATION_TYPE_IMAGE_LINEAR:
   3670         return
   3671             suballocType2 == VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL;
   3672     case VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL:
   3673         return false;
   3674     default:
   3675         VMA_ASSERT(0);
   3676         return true;
   3677     }
   3678 }
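// Illustrative sketch, not part of the original header: a buffer next to an
// optimally tiled image conflicts, while two buffers, or a buffer next to a linear
// image, do not. Hypothetical, disabled example:
#if 0
#include <cassert>
void ExampleGranularityConflict()
{
    assert(VmaIsBufferImageGranularityConflict(
        VMA_SUBALLOCATION_TYPE_BUFFER, VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL));
    assert(!VmaIsBufferImageGranularityConflict(
        VMA_SUBALLOCATION_TYPE_BUFFER, VMA_SUBALLOCATION_TYPE_BUFFER));
    assert(!VmaIsBufferImageGranularityConflict(
        VMA_SUBALLOCATION_TYPE_BUFFER, VMA_SUBALLOCATION_TYPE_IMAGE_LINEAR));
}
#endif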
   3679 
   3680 static void VmaWriteMagicValue(void* pData, VkDeviceSize offset)
   3681 {
   3682 #if VMA_DEBUG_MARGIN > 0 && VMA_DEBUG_DETECT_CORRUPTION
   3683     uint32_t* pDst = (uint32_t*)((char*)pData + offset);
   3684     const size_t numberCount = VMA_DEBUG_MARGIN / sizeof(uint32_t);
   3685     for (size_t i = 0; i < numberCount; ++i, ++pDst)
   3686     {
   3687         *pDst = VMA_CORRUPTION_DETECTION_MAGIC_VALUE;
   3688     }
   3689 #else
   3690     // no-op
   3691 #endif
   3692 }
   3693 
   3694 static bool VmaValidateMagicValue(const void* pData, VkDeviceSize offset)
   3695 {
   3696 #if VMA_DEBUG_MARGIN > 0 && VMA_DEBUG_DETECT_CORRUPTION
   3697     const uint32_t* pSrc = (const uint32_t*)((const char*)pData + offset);
   3698     const size_t numberCount = VMA_DEBUG_MARGIN / sizeof(uint32_t);
   3699     for (size_t i = 0; i < numberCount; ++i, ++pSrc)
   3700     {
   3701         if (*pSrc != VMA_CORRUPTION_DETECTION_MAGIC_VALUE)
   3702         {
   3703             return false;
   3704         }
   3705     }
   3706 #endif
   3707     return true;
   3708 }
   3709 
   3710 /*
   3711 Fills structure with parameters of an example buffer to be used for transfers
   3712 during GPU memory defragmentation.
   3713 */
   3714 static void VmaFillGpuDefragmentationBufferCreateInfo(VkBufferCreateInfo& outBufCreateInfo)
   3715 {
   3716     memset(&outBufCreateInfo, 0, sizeof(outBufCreateInfo));
   3717     outBufCreateInfo.sType = VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO;
   3718     outBufCreateInfo.usage = VK_BUFFER_USAGE_TRANSFER_SRC_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;
   3719     outBufCreateInfo.size = (VkDeviceSize)VMA_DEFAULT_LARGE_HEAP_BLOCK_SIZE; // Example size.
   3720 }
   3721 
   3722 
   3723 /*
    3724 Performs a binary search and returns an iterator to the first element that is
    3725 greater than or equal to (key), according to the comparison (cmp).
    3726 
    3727 Cmp should return true if the first argument is less than the second argument.
    3728 
    3729 The returned iterator points to the found element if it is present in the
    3730 collection, or to the place where a new element with value (key) should be inserted.
   3731 */
   3732 template <typename CmpLess, typename IterT, typename KeyT>
   3733 static IterT VmaBinaryFindFirstNotLess(IterT beg, IterT end, const KeyT& key, const CmpLess& cmp)
   3734 {
   3735     size_t down = 0, up = size_t(end - beg);
   3736     while (down < up)
   3737     {
   3738         const size_t mid = down + (up - down) / 2;  // Overflow-safe midpoint calculation
   3739         if (cmp(*(beg + mid), key))
   3740         {
   3741             down = mid + 1;
   3742         }
   3743         else
   3744         {
   3745             up = mid;
   3746         }
   3747     }
   3748     return beg + down;
   3749 }
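// Illustrative sketch, not part of the original header: on a sorted range this
// behaves like std::lower_bound. Hypothetical, disabled example:
#if 0
#include <cassert>
void ExampleBinaryFind()
{
    const int arr[] = { 1, 3, 3, 7 };
    const auto less = [](int a, int b) { return a < b; };
    assert(VmaBinaryFindFirstNotLess(arr, arr + 4, 3, less) == arr + 1); // first 3
    assert(VmaBinaryFindFirstNotLess(arr, arr + 4, 8, less) == arr + 4); // past end
}
#endif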
   3750 
   3751 template<typename CmpLess, typename IterT, typename KeyT>
   3752 IterT VmaBinaryFindSorted(const IterT& beg, const IterT& end, const KeyT& value, const CmpLess& cmp)
   3753 {
   3754     IterT it = VmaBinaryFindFirstNotLess<CmpLess, IterT, KeyT>(
   3755         beg, end, value, cmp);
   3756     if (it == end ||
   3757         (!cmp(*it, value) && !cmp(value, *it)))
   3758     {
   3759         return it;
   3760     }
   3761     return end;
   3762 }
   3763 
   3764 /*
    3765 Returns true if all pointers in the array are non-null and unique.
    3766 Warning! O(n^2) complexity. Use only inside VMA_HEAVY_ASSERT.
    3767 T must be a pointer type, e.g. VmaAllocation, VmaPool.
   3768 */
   3769 template<typename T>
   3770 static bool VmaValidatePointerArray(uint32_t count, const T* arr)
   3771 {
   3772     for (uint32_t i = 0; i < count; ++i)
   3773     {
   3774         const T iPtr = arr[i];
   3775         if (iPtr == VMA_NULL)
   3776         {
   3777             return false;
   3778         }
   3779         for (uint32_t j = i + 1; j < count; ++j)
   3780         {
   3781             if (iPtr == arr[j])
   3782             {
   3783                 return false;
   3784             }
   3785         }
   3786     }
   3787     return true;
   3788 }
   3789 
   3790 template<typename MainT, typename NewT>
   3791 static inline void VmaPnextChainPushFront(MainT* mainStruct, NewT* newStruct)
   3792 {
   3793     newStruct->pNext = mainStruct->pNext;
   3794     mainStruct->pNext = newStruct;
   3795 }
   3796 // Finds structure with s->sType == sType in mainStruct->pNext chain.
   3797 // Returns pointer to it. If not found, returns null.
   3798 template<typename FindT, typename MainT>
   3799 static inline const FindT* VmaPnextChainFind(const MainT* mainStruct, VkStructureType sType)
   3800 {
   3801     for(const VkBaseInStructure* s = (const VkBaseInStructure*)mainStruct->pNext;
   3802         s != VMA_NULL; s = s->pNext)
   3803     {
   3804         if(s->sType == sType)
   3805         {
   3806             return (const FindT*)s;
   3807         }
   3808     }
   3809     return VMA_NULL;
   3810 }
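// Illustrative sketch, not part of the original header: pushing an extension struct
// onto a pNext chain and finding it again by sType, here using the core Vulkan 1.1
// struct VkMemoryDedicatedAllocateInfo. Hypothetical, disabled example:
#if 0
void ExamplePnextChain()
{
    VkMemoryAllocateInfo allocInfo = { VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO };
    VkMemoryDedicatedAllocateInfo dedicatedInfo = { VK_STRUCTURE_TYPE_MEMORY_DEDICATED_ALLOCATE_INFO };
    VmaPnextChainPushFront(&allocInfo, &dedicatedInfo);
    const VkMemoryDedicatedAllocateInfo* const found =
        VmaPnextChainFind<VkMemoryDedicatedAllocateInfo>(
            &allocInfo, VK_STRUCTURE_TYPE_MEMORY_DEDICATED_ALLOCATE_INFO);
    // found == &dedicatedInfo
}
#endif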
   3811 
   3812 // An abstraction over buffer or image `usage` flags, depending on available extensions.
   3813 struct VmaBufferImageUsage
   3814 {
   3815 #if VMA_KHR_MAINTENANCE5
   3816     typedef uint64_t BaseType; // VkFlags64
   3817 #else
   3818     typedef uint32_t BaseType; // VkFlags32
   3819 #endif
   3820 
   3821     static const VmaBufferImageUsage UNKNOWN;
   3822 
   3823     BaseType Value;
   3824 
   3825     VmaBufferImageUsage() { *this = UNKNOWN; }
   3826     explicit VmaBufferImageUsage(BaseType usage) : Value(usage) { }
   3827     VmaBufferImageUsage(const VkBufferCreateInfo &createInfo, bool useKhrMaintenance5);
   3828     explicit VmaBufferImageUsage(const VkImageCreateInfo &createInfo);
   3829 
   3830     bool operator==(const VmaBufferImageUsage& rhs) const { return Value == rhs.Value; }
   3831     bool operator!=(const VmaBufferImageUsage& rhs) const { return Value != rhs.Value; }
   3832 
   3833     bool Contains(BaseType flag) const { return (Value & flag) != 0; }
   3834     bool ContainsDeviceAccess() const
   3835     {
    3836         // This relies on the values of VK_IMAGE_USAGE_TRANSFER* being the same as VK_BUFFER_USAGE_TRANSFER*.
   3837         return (Value & ~BaseType(VK_BUFFER_USAGE_TRANSFER_DST_BIT | VK_BUFFER_USAGE_TRANSFER_SRC_BIT)) != 0;
   3838     }
   3839 };
   3840 
   3841 const VmaBufferImageUsage VmaBufferImageUsage::UNKNOWN = VmaBufferImageUsage(0);
   3842 
   3843 static void swap(VmaBufferImageUsage& lhs, VmaBufferImageUsage& rhs) noexcept
   3844 {
   3845     using std::swap;
   3846     swap(lhs.Value, rhs.Value);
   3847 }
   3848 
   3849 VmaBufferImageUsage::VmaBufferImageUsage(const VkBufferCreateInfo &createInfo,
   3850     bool useKhrMaintenance5)
   3851 {
   3852 #if VMA_KHR_MAINTENANCE5
   3853     if(useKhrMaintenance5)
   3854     {
   3855         // If VkBufferCreateInfo::pNext chain contains VkBufferUsageFlags2CreateInfoKHR,
   3856         // take usage from it and ignore VkBufferCreateInfo::usage, per specification
   3857         // of the VK_KHR_maintenance5 extension.
   3858         const VkBufferUsageFlags2CreateInfoKHR* const usageFlags2 =
   3859             VmaPnextChainFind<VkBufferUsageFlags2CreateInfoKHR>(&createInfo, VK_STRUCTURE_TYPE_BUFFER_USAGE_FLAGS_2_CREATE_INFO_KHR);
   3860         if(usageFlags2)
   3861         {
   3862             this->Value = usageFlags2->usage;
   3863             return;
   3864         }
   3865     }
   3866 #endif
   3867 
   3868     this->Value = (BaseType)createInfo.usage;
   3869 }
   3870 
   3871 VmaBufferImageUsage::VmaBufferImageUsage(const VkImageCreateInfo &createInfo)
   3872 {
   3873     // Maybe in the future there will be VK_KHR_maintenanceN extension with structure
   3874     // VkImageUsageFlags2CreateInfoKHR, like the one for buffers...
   3875 
   3876     this->Value = (BaseType)createInfo.usage;
   3877 }
   3878 
    3879 // This is the main algorithm that guides the selection of the memory type best suited for an allocation -
    3880 // it converts usage into required/preferred/not-preferred flags.
   3881 static bool FindMemoryPreferences(
   3882     bool isIntegratedGPU,
   3883     const VmaAllocationCreateInfo& allocCreateInfo,
   3884     VmaBufferImageUsage bufImgUsage,
   3885     VkMemoryPropertyFlags& outRequiredFlags,
   3886     VkMemoryPropertyFlags& outPreferredFlags,
   3887     VkMemoryPropertyFlags& outNotPreferredFlags)
   3888 {
   3889     outRequiredFlags = allocCreateInfo.requiredFlags;
   3890     outPreferredFlags = allocCreateInfo.preferredFlags;
   3891     outNotPreferredFlags = 0;
   3892 
   3893     switch(allocCreateInfo.usage)
   3894     {
   3895     case VMA_MEMORY_USAGE_UNKNOWN:
   3896         break;
   3897     case VMA_MEMORY_USAGE_GPU_ONLY:
   3898         if(!isIntegratedGPU || (outPreferredFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) == 0)
   3899         {
   3900             outPreferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
   3901         }
   3902         break;
   3903     case VMA_MEMORY_USAGE_CPU_ONLY:
   3904         outRequiredFlags |= VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT | VK_MEMORY_PROPERTY_HOST_COHERENT_BIT;
   3905         break;
   3906     case VMA_MEMORY_USAGE_CPU_TO_GPU:
   3907         outRequiredFlags |= VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
   3908         if(!isIntegratedGPU || (outPreferredFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) == 0)
   3909         {
   3910             outPreferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
   3911         }
   3912         break;
   3913     case VMA_MEMORY_USAGE_GPU_TO_CPU:
   3914         outRequiredFlags |= VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
   3915         outPreferredFlags |= VK_MEMORY_PROPERTY_HOST_CACHED_BIT;
   3916         break;
   3917     case VMA_MEMORY_USAGE_CPU_COPY:
   3918         outNotPreferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
   3919         break;
   3920     case VMA_MEMORY_USAGE_GPU_LAZILY_ALLOCATED:
   3921         outRequiredFlags |= VK_MEMORY_PROPERTY_LAZILY_ALLOCATED_BIT;
   3922         break;
   3923     case VMA_MEMORY_USAGE_AUTO:
   3924     case VMA_MEMORY_USAGE_AUTO_PREFER_DEVICE:
   3925     case VMA_MEMORY_USAGE_AUTO_PREFER_HOST:
   3926     {
   3927         if(bufImgUsage == VmaBufferImageUsage::UNKNOWN)
   3928         {
   3929             VMA_ASSERT(0 && "VMA_MEMORY_USAGE_AUTO* values can only be used with functions like vmaCreateBuffer, vmaCreateImage so that the details of the created resource are known."
   3930                 " Maybe you use VkBufferUsageFlags2CreateInfoKHR but forgot to use VMA_ALLOCATOR_CREATE_KHR_MAINTENANCE5_BIT?" );
   3931             return false;
   3932         }
   3933 
   3934         const bool deviceAccess = bufImgUsage.ContainsDeviceAccess();
   3935         const bool hostAccessSequentialWrite = (allocCreateInfo.flags & VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT) != 0;
   3936         const bool hostAccessRandom = (allocCreateInfo.flags & VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT) != 0;
   3937         const bool hostAccessAllowTransferInstead = (allocCreateInfo.flags & VMA_ALLOCATION_CREATE_HOST_ACCESS_ALLOW_TRANSFER_INSTEAD_BIT) != 0;
   3938         const bool preferDevice = allocCreateInfo.usage == VMA_MEMORY_USAGE_AUTO_PREFER_DEVICE;
   3939         const bool preferHost = allocCreateInfo.usage == VMA_MEMORY_USAGE_AUTO_PREFER_HOST;
   3940 
   3941         // CPU random access - e.g. a buffer written to or transferred from GPU to read back on CPU.
   3942         if(hostAccessRandom)
   3943         {
   3944             // Prefer cached. Cannot require it, because some platforms don't have it (e.g. Raspberry Pi - see #362)!
   3945             outPreferredFlags |= VK_MEMORY_PROPERTY_HOST_CACHED_BIT;
   3946 
   3947             if (!isIntegratedGPU && deviceAccess && hostAccessAllowTransferInstead && !preferHost)
   3948             {
   3949                 // Nice if it will end up in HOST_VISIBLE, but more importantly prefer DEVICE_LOCAL.
   3950                 // Omitting HOST_VISIBLE here is intentional.
   3951                 // In case there is DEVICE_LOCAL | HOST_VISIBLE | HOST_CACHED, it will pick that one.
    3952                 // Otherwise, this will give the same weight to DEVICE_LOCAL as to HOST_VISIBLE | HOST_CACHED and select the former if it occurs first on the list.
   3953                 outPreferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
   3954             }
   3955             else
   3956             {
   3957                 // Always CPU memory.
   3958                 outRequiredFlags |= VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
   3959             }
   3960         }
   3961         // CPU sequential write - may be CPU or host-visible GPU memory, uncached and write-combined.
   3962         else if(hostAccessSequentialWrite)
   3963         {
   3964             // Want uncached and write-combined.
   3965             outNotPreferredFlags |= VK_MEMORY_PROPERTY_HOST_CACHED_BIT;
   3966 
   3967             if(!isIntegratedGPU && deviceAccess && hostAccessAllowTransferInstead && !preferHost)
   3968             {
   3969                 outPreferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT | VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
   3970             }
   3971             else
   3972             {
   3973                 outRequiredFlags |= VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
   3974                 // Direct GPU access, CPU sequential write (e.g. a dynamic uniform buffer updated every frame)
   3975                 if(deviceAccess)
   3976                 {
   3977                     // Could go to CPU memory or GPU BAR/unified. Up to the user to decide. If no preference, choose GPU memory.
   3978                     if(preferHost)
   3979                         outNotPreferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
   3980                     else
   3981                         outPreferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
   3982                 }
   3983                 // GPU no direct access, CPU sequential write (e.g. an upload buffer to be transferred to the GPU)
   3984                 else
   3985                 {
   3986                     // Could go to CPU memory or GPU BAR/unified. Up to the user to decide. If no preference, choose CPU memory.
   3987                     if(preferDevice)
   3988                         outPreferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
   3989                     else
   3990                         outNotPreferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
   3991                 }
   3992             }
   3993         }
   3994         // No CPU access
   3995         else
   3996         {
   3997             // if(deviceAccess)
   3998             //
   3999             // GPU access, no CPU access (e.g. a color attachment image) - prefer GPU memory,
   4000             // unless there is a clear preference from the user not to do so.
   4001             //
   4002             // else:
   4003             //
   4004             // No direct GPU access, no CPU access, just transfers.
   4005             // It may be staging copy intended for e.g. preserving image for next frame (then better GPU memory) or
   4006             // a "swap file" copy to free some GPU memory (then better CPU memory).
    4007             // Up to the user to decide. If no preference, assume the former and choose GPU memory.
   4008 
   4009             if(preferHost)
   4010                 outNotPreferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
   4011             else
   4012                 outPreferredFlags |= VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
   4013         }
   4014         break;
   4015     }
   4016     default:
   4017         VMA_ASSERT(0);
   4018     }
   4019 
   4020     // Avoid DEVICE_COHERENT unless explicitly requested.
   4021     if(((allocCreateInfo.requiredFlags | allocCreateInfo.preferredFlags) &
   4022         (VK_MEMORY_PROPERTY_DEVICE_COHERENT_BIT_AMD_COPY | VK_MEMORY_PROPERTY_DEVICE_UNCACHED_BIT_AMD_COPY)) == 0)
   4023     {
   4024         outNotPreferredFlags |= VK_MEMORY_PROPERTY_DEVICE_UNCACHED_BIT_AMD_COPY;
   4025     }
   4026 
   4027     return true;
   4028 }
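// Illustrative sketch, not part of the original header: for a readback buffer
// requested with VMA_MEMORY_USAGE_AUTO and random host access, the function
// requires HOST_VISIBLE and prefers HOST_CACHED memory. Hypothetical, disabled example:
#if 0
void ExampleFindMemoryPreferences()
{
    VmaAllocationCreateInfo createInfo = {};
    createInfo.usage = VMA_MEMORY_USAGE_AUTO;
    createInfo.flags = VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT;

    VkMemoryPropertyFlags required, preferred, notPreferred;
    const bool ok = FindMemoryPreferences(
        false, // isIntegratedGPU
        createInfo,
        VmaBufferImageUsage(VK_BUFFER_USAGE_TRANSFER_DST_BIT), // known buffer usage
        required, preferred, notPreferred);
    // ok == true; required contains HOST_VISIBLE, preferred contains HOST_CACHED.
}
#endif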
   4029 
   4030 ////////////////////////////////////////////////////////////////////////////////
   4031 // Memory allocation
   4032 
   4033 static void* VmaMalloc(const VkAllocationCallbacks* pAllocationCallbacks, size_t size, size_t alignment)
   4034 {
   4035     void* result = VMA_NULL;
   4036     if ((pAllocationCallbacks != VMA_NULL) &&
   4037         (pAllocationCallbacks->pfnAllocation != VMA_NULL))
   4038     {
   4039         result = (*pAllocationCallbacks->pfnAllocation)(
   4040             pAllocationCallbacks->pUserData,
   4041             size,
   4042             alignment,
   4043             VK_SYSTEM_ALLOCATION_SCOPE_OBJECT);
   4044     }
   4045     else
   4046     {
   4047         result = VMA_SYSTEM_ALIGNED_MALLOC(size, alignment);
   4048     }
   4049     VMA_ASSERT(result != VMA_NULL && "CPU memory allocation failed.");
   4050     return result;
   4051 }
   4052 
   4053 static void VmaFree(const VkAllocationCallbacks* pAllocationCallbacks, void* ptr)
   4054 {
   4055     if ((pAllocationCallbacks != VMA_NULL) &&
   4056         (pAllocationCallbacks->pfnFree != VMA_NULL))
   4057     {
   4058         (*pAllocationCallbacks->pfnFree)(pAllocationCallbacks->pUserData, ptr);
   4059     }
   4060     else
   4061     {
   4062         VMA_SYSTEM_ALIGNED_FREE(ptr);
   4063     }
   4064 }
   4065 
   4066 template<typename T>
   4067 static T* VmaAllocate(const VkAllocationCallbacks* pAllocationCallbacks)
   4068 {
   4069     return (T*)VmaMalloc(pAllocationCallbacks, sizeof(T), VMA_ALIGN_OF(T));
   4070 }
   4071 
   4072 template<typename T>
   4073 static T* VmaAllocateArray(const VkAllocationCallbacks* pAllocationCallbacks, size_t count)
   4074 {
   4075     return (T*)VmaMalloc(pAllocationCallbacks, sizeof(T) * count, VMA_ALIGN_OF(T));
   4076 }
   4077 
   4078 #define vma_new(allocator, type)   new(VmaAllocate<type>(allocator))(type)
   4079 
   4080 #define vma_new_array(allocator, type, count)   new(VmaAllocateArray<type>((allocator), (count)))(type)
   4081 
   4082 template<typename T>
   4083 static void vma_delete(const VkAllocationCallbacks* pAllocationCallbacks, T* ptr)
   4084 {
   4085     ptr->~T();
   4086     VmaFree(pAllocationCallbacks, ptr);
   4087 }
   4088 
   4089 template<typename T>
   4090 static void vma_delete_array(const VkAllocationCallbacks* pAllocationCallbacks, T* ptr, size_t count)
   4091 {
   4092     if (ptr != VMA_NULL)
   4093     {
   4094         for (size_t i = count; i--; )
   4095         {
   4096             ptr[i].~T();
   4097         }
   4098         VmaFree(pAllocationCallbacks, ptr);
   4099     }
   4100 }
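// Illustrative sketch, not part of the original header: vma_new pairs VmaMalloc
// with placement new, vma_delete pairs an explicit destructor call with VmaFree;
// null callbacks fall back to the system allocator. Hypothetical, disabled example:
#if 0
struct ExamplePayload { int value = 42; };
void ExampleVmaNewDelete()
{
    ExamplePayload* p = vma_new(VMA_NULL, ExamplePayload)();
    // ... use p ...
    vma_delete(VMA_NULL, p); // runs ~ExamplePayload(), then frees the storage
}
#endif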
   4101 
   4102 static char* VmaCreateStringCopy(const VkAllocationCallbacks* allocs, const char* srcStr)
   4103 {
   4104     if (srcStr != VMA_NULL)
   4105     {
   4106         const size_t len = strlen(srcStr);
   4107         char* const result = vma_new_array(allocs, char, len + 1);
   4108         memcpy(result, srcStr, len + 1);
   4109         return result;
   4110     }
   4111     return VMA_NULL;
   4112 }
   4113 
   4114 #if VMA_STATS_STRING_ENABLED
   4115 static char* VmaCreateStringCopy(const VkAllocationCallbacks* allocs, const char* srcStr, size_t strLen)
   4116 {
   4117     if (srcStr != VMA_NULL)
   4118     {
   4119         char* const result = vma_new_array(allocs, char, strLen + 1);
   4120         memcpy(result, srcStr, strLen);
   4121         result[strLen] = '\0';
   4122         return result;
   4123     }
   4124     return VMA_NULL;
   4125 }
   4126 #endif // VMA_STATS_STRING_ENABLED
   4127 
   4128 static void VmaFreeString(const VkAllocationCallbacks* allocs, char* str)
   4129 {
   4130     if (str != VMA_NULL)
   4131     {
   4132         const size_t len = strlen(str);
   4133         vma_delete_array(allocs, str, len + 1);
   4134     }
   4135 }
   4136 
   4137 template<typename CmpLess, typename VectorT>
   4138 size_t VmaVectorInsertSorted(VectorT& vector, const typename VectorT::value_type& value)
   4139 {
   4140     const size_t indexToInsert = VmaBinaryFindFirstNotLess(
   4141         vector.data(),
   4142         vector.data() + vector.size(),
   4143         value,
   4144         CmpLess()) - vector.data();
   4145     VmaVectorInsert(vector, indexToInsert, value);
   4146     return indexToInsert;
   4147 }
   4148 
   4149 template<typename CmpLess, typename VectorT>
   4150 bool VmaVectorRemoveSorted(VectorT& vector, const typename VectorT::value_type& value)
   4151 {
   4152     CmpLess comparator;
   4153     typename VectorT::iterator it = VmaBinaryFindFirstNotLess(
   4154         vector.begin(),
   4155         vector.end(),
   4156         value,
   4157         comparator);
   4158     if ((it != vector.end()) && !comparator(*it, value) && !comparator(value, *it))
   4159     {
   4160         size_t indexToRemove = it - vector.begin();
   4161         VmaVectorRemove(vector, indexToRemove);
   4162         return true;
   4163     }
   4164     return false;
   4165 }
   4166 #endif // _VMA_FUNCTIONS
   4167 
   4168 #ifndef _VMA_STATISTICS_FUNCTIONS
   4169 
   4170 static void VmaClearStatistics(VmaStatistics& outStats)
   4171 {
   4172     outStats.blockCount = 0;
   4173     outStats.allocationCount = 0;
   4174     outStats.blockBytes = 0;
   4175     outStats.allocationBytes = 0;
   4176 }
   4177 
   4178 static void VmaAddStatistics(VmaStatistics& inoutStats, const VmaStatistics& src)
   4179 {
   4180     inoutStats.blockCount += src.blockCount;
   4181     inoutStats.allocationCount += src.allocationCount;
   4182     inoutStats.blockBytes += src.blockBytes;
   4183     inoutStats.allocationBytes += src.allocationBytes;
   4184 }
   4185 
   4186 static void VmaClearDetailedStatistics(VmaDetailedStatistics& outStats)
   4187 {
   4188     VmaClearStatistics(outStats.statistics);
   4189     outStats.unusedRangeCount = 0;
   4190     outStats.allocationSizeMin = VK_WHOLE_SIZE;
   4191     outStats.allocationSizeMax = 0;
   4192     outStats.unusedRangeSizeMin = VK_WHOLE_SIZE;
   4193     outStats.unusedRangeSizeMax = 0;
   4194 }
   4195 
   4196 static void VmaAddDetailedStatisticsAllocation(VmaDetailedStatistics& inoutStats, VkDeviceSize size)
   4197 {
   4198     inoutStats.statistics.allocationCount++;
   4199     inoutStats.statistics.allocationBytes += size;
   4200     inoutStats.allocationSizeMin = VMA_MIN(inoutStats.allocationSizeMin, size);
   4201     inoutStats.allocationSizeMax = VMA_MAX(inoutStats.allocationSizeMax, size);
   4202 }
   4203 
   4204 static void VmaAddDetailedStatisticsUnusedRange(VmaDetailedStatistics& inoutStats, VkDeviceSize size)
   4205 {
   4206     inoutStats.unusedRangeCount++;
   4207     inoutStats.unusedRangeSizeMin = VMA_MIN(inoutStats.unusedRangeSizeMin, size);
   4208     inoutStats.unusedRangeSizeMax = VMA_MAX(inoutStats.unusedRangeSizeMax, size);
   4209 }
   4210 
   4211 static void VmaAddDetailedStatistics(VmaDetailedStatistics& inoutStats, const VmaDetailedStatistics& src)
   4212 {
   4213     VmaAddStatistics(inoutStats.statistics, src.statistics);
   4214     inoutStats.unusedRangeCount += src.unusedRangeCount;
   4215     inoutStats.allocationSizeMin = VMA_MIN(inoutStats.allocationSizeMin, src.allocationSizeMin);
   4216     inoutStats.allocationSizeMax = VMA_MAX(inoutStats.allocationSizeMax, src.allocationSizeMax);
   4217     inoutStats.unusedRangeSizeMin = VMA_MIN(inoutStats.unusedRangeSizeMin, src.unusedRangeSizeMin);
   4218     inoutStats.unusedRangeSizeMax = VMA_MAX(inoutStats.unusedRangeSizeMax, src.unusedRangeSizeMax);
   4219 }
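// Illustrative sketch, not part of the original header: cleared statistics start
// from "empty" sentinels (min = VK_WHOLE_SIZE, max = 0) so that the first recorded
// allocation establishes both bounds. Hypothetical, disabled example:
#if 0
void ExampleStatistics()
{
    VmaDetailedStatistics stats;
    VmaClearDetailedStatistics(stats);
    VmaAddDetailedStatisticsAllocation(stats, 256);
    VmaAddDetailedStatisticsAllocation(stats, 1024);
    // allocationCount == 2, allocationBytes == 1280,
    // allocationSizeMin == 256, allocationSizeMax == 1024.
}
#endif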
   4220 
   4221 #endif // _VMA_STATISTICS_FUNCTIONS
   4222 
   4223 #ifndef _VMA_MUTEX_LOCK
   4224 // Helper RAII class to lock a mutex in constructor and unlock it in destructor (at the end of scope).
   4225 struct VmaMutexLock
   4226 {
   4227     VMA_CLASS_NO_COPY_NO_MOVE(VmaMutexLock)
   4228 public:
   4229     VmaMutexLock(VMA_MUTEX& mutex, bool useMutex = true) :
   4230         m_pMutex(useMutex ? &mutex : VMA_NULL)
   4231     {
   4232         if (m_pMutex) { m_pMutex->Lock(); }
   4233     }
   4234     ~VmaMutexLock() {  if (m_pMutex) { m_pMutex->Unlock(); } }
   4235 
   4236 private:
   4237     VMA_MUTEX* m_pMutex;
   4238 };
   4239 
   4240 // Helper RAII class to lock a RW mutex in constructor and unlock it in destructor (at the end of scope), for reading.
   4241 struct VmaMutexLockRead
   4242 {
   4243     VMA_CLASS_NO_COPY_NO_MOVE(VmaMutexLockRead)
   4244 public:
   4245     VmaMutexLockRead(VMA_RW_MUTEX& mutex, bool useMutex) :
   4246         m_pMutex(useMutex ? &mutex : VMA_NULL)
   4247     {
   4248         if (m_pMutex) { m_pMutex->LockRead(); }
   4249     }
   4250     ~VmaMutexLockRead() { if (m_pMutex) { m_pMutex->UnlockRead(); } }
   4251 
   4252 private:
   4253     VMA_RW_MUTEX* m_pMutex;
   4254 };
   4255 
   4256 // Helper RAII class to lock a RW mutex in constructor and unlock it in destructor (at the end of scope), for writing.
   4257 struct VmaMutexLockWrite
   4258 {
   4259     VMA_CLASS_NO_COPY_NO_MOVE(VmaMutexLockWrite)
   4260 public:
   4261     VmaMutexLockWrite(VMA_RW_MUTEX& mutex, bool useMutex)
   4262         : m_pMutex(useMutex ? &mutex : VMA_NULL)
   4263     {
   4264         if (m_pMutex) { m_pMutex->LockWrite(); }
   4265     }
   4266     ~VmaMutexLockWrite() { if (m_pMutex) { m_pMutex->UnlockWrite(); } }
   4267 
   4268 private:
   4269     VMA_RW_MUTEX* m_pMutex;
   4270 };
   4271 
   4272 #if VMA_DEBUG_GLOBAL_MUTEX
   4273     static VMA_MUTEX gDebugGlobalMutex;
   4274     #define VMA_DEBUG_GLOBAL_MUTEX_LOCK VmaMutexLock debugGlobalMutexLock(gDebugGlobalMutex, true);
   4275 #else
   4276     #define VMA_DEBUG_GLOBAL_MUTEX_LOCK
   4277 #endif
   4278 #endif // _VMA_MUTEX_LOCK
   4279 
   4280 #ifndef _VMA_ATOMIC_TRANSACTIONAL_INCREMENT
    4281 // An object that increments a given atomic but decrements it back in the destructor unless Commit() is called.
   4282 template<typename AtomicT>
   4283 struct AtomicTransactionalIncrement
   4284 {
   4285 public:
   4286     using T = decltype(AtomicT().load());
   4287 
   4288     ~AtomicTransactionalIncrement()
   4289     {
   4290         if(m_Atomic)
   4291             --(*m_Atomic);
   4292     }
   4293 
   4294     void Commit() { m_Atomic = VMA_NULL; }
   4295     T Increment(AtomicT* atomic)
   4296     {
   4297         m_Atomic = atomic;
   4298         return m_Atomic->fetch_add(1);
   4299     }
   4300 
   4301 private:
   4302     AtomicT* m_Atomic = VMA_NULL;
   4303 };
   4304 #endif // _VMA_ATOMIC_TRANSACTIONAL_INCREMENT
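// Illustrative sketch, not part of the original header: the intended pattern is to
// increment a live-object counter up front, attempt fallible initialization, and
// Commit() only on success, so the destructor rolls the counter back on failure.
// Hypothetical, disabled example:
#if 0
#include <atomic>
bool ExampleTransactionalIncrement(std::atomic<uint32_t>& liveCount, bool initSucceeded)
{
    AtomicTransactionalIncrement<std::atomic<uint32_t>> increment;
    increment.Increment(&liveCount);
    if (!initSucceeded)
        return false;   // destructor decrements liveCount back
    increment.Commit(); // keep the increment
    return true;
}
#endif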
   4305 
   4306 #ifndef _VMA_STL_ALLOCATOR
   4307 // STL-compatible allocator.
   4308 template<typename T>
   4309 struct VmaStlAllocator
   4310 {
   4311     const VkAllocationCallbacks* const m_pCallbacks;
   4312     typedef T value_type;
   4313 
   4314     VmaStlAllocator(const VkAllocationCallbacks* pCallbacks) : m_pCallbacks(pCallbacks) {}
   4315     template<typename U>
   4316     VmaStlAllocator(const VmaStlAllocator<U>& src) : m_pCallbacks(src.m_pCallbacks) {}
   4317     VmaStlAllocator(const VmaStlAllocator&) = default;
   4318     VmaStlAllocator& operator=(const VmaStlAllocator&) = delete;
   4319 
   4320     T* allocate(size_t n) { return VmaAllocateArray<T>(m_pCallbacks, n); }
   4321     void deallocate(T* p, size_t n) { VmaFree(m_pCallbacks, p); }
   4322 
   4323     template<typename U>
   4324     bool operator==(const VmaStlAllocator<U>& rhs) const
   4325     {
   4326         return m_pCallbacks == rhs.m_pCallbacks;
   4327     }
   4328     template<typename U>
   4329     bool operator!=(const VmaStlAllocator<U>& rhs) const
   4330     {
   4331         return m_pCallbacks != rhs.m_pCallbacks;
   4332     }
   4333 };
   4334 #endif // _VMA_STL_ALLOCATOR
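// Illustrative sketch, not part of the original header: since VmaStlAllocator
// models the standard allocator requirements, it also plugs into standard
// containers; null callbacks fall back to the system allocator. Hypothetical,
// disabled example:
#if 0
#include <vector>
void ExampleStlAllocator()
{
    const VmaStlAllocator<int> alloc(VMA_NULL);
    std::vector<int, VmaStlAllocator<int>> v(alloc);
    v.push_back(7); // allocations are routed through VmaMalloc/VmaFree
}
#endif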
   4335 
   4336 #ifndef _VMA_VECTOR
    4337 /* Class with an interface compatible with a subset of std::vector.
    4338 T must be POD because constructors and destructors are not called and memcpy is
    4339 used to copy and move these objects. */
   4340 template<typename T, typename AllocatorT>
   4341 class VmaVector
   4342 {
   4343 public:
   4344     typedef T value_type;
   4345     typedef T* iterator;
   4346     typedef const T* const_iterator;
   4347 
   4348     VmaVector(const AllocatorT& allocator);
   4349     VmaVector(size_t count, const AllocatorT& allocator);
   4350     // This version of the constructor is here for compatibility with pre-C++14 std::vector.
   4351     // value is unused.
   4352     VmaVector(size_t count, const T& value, const AllocatorT& allocator) : VmaVector(count, allocator) {}
   4353     VmaVector(const VmaVector<T, AllocatorT>& src);
   4354     VmaVector& operator=(const VmaVector& rhs);
   4355     ~VmaVector() { VmaFree(m_Allocator.m_pCallbacks, m_pArray); }
   4356 
   4357     bool empty() const { return m_Count == 0; }
   4358     size_t size() const { return m_Count; }
   4359     T* data() { return m_pArray; }
   4360     T& front() { VMA_HEAVY_ASSERT(m_Count > 0); return m_pArray[0]; }
   4361     T& back() { VMA_HEAVY_ASSERT(m_Count > 0); return m_pArray[m_Count - 1]; }
   4362     const T* data() const { return m_pArray; }
   4363     const T& front() const { VMA_HEAVY_ASSERT(m_Count > 0); return m_pArray[0]; }
   4364     const T& back() const { VMA_HEAVY_ASSERT(m_Count > 0); return m_pArray[m_Count - 1]; }
   4365 
   4366     iterator begin() { return m_pArray; }
   4367     iterator end() { return m_pArray + m_Count; }
   4368     const_iterator cbegin() const { return m_pArray; }
   4369     const_iterator cend() const { return m_pArray + m_Count; }
   4370     const_iterator begin() const { return cbegin(); }
   4371     const_iterator end() const { return cend(); }
   4372 
   4373     void pop_front() { VMA_HEAVY_ASSERT(m_Count > 0); remove(0); }
   4374     void pop_back() { VMA_HEAVY_ASSERT(m_Count > 0); resize(size() - 1); }
   4375     void push_front(const T& src) { insert(0, src); }
   4376 
   4377     void push_back(const T& src);
   4378     void reserve(size_t newCapacity, bool freeMemory = false);
   4379     void resize(size_t newCount);
   4380     void clear() { resize(0); }
   4381     void shrink_to_fit();
   4382     void insert(size_t index, const T& src);
   4383     void remove(size_t index);
   4384 
   4385     T& operator[](size_t index) { VMA_HEAVY_ASSERT(index < m_Count); return m_pArray[index]; }
   4386     const T& operator[](size_t index) const { VMA_HEAVY_ASSERT(index < m_Count); return m_pArray[index]; }
   4387 
   4388 private:
   4389     AllocatorT m_Allocator;
   4390     T* m_pArray;
   4391     size_t m_Count;
   4392     size_t m_Capacity;
   4393 };
   4394 
   4395 #ifndef _VMA_VECTOR_FUNCTIONS
   4396 template<typename T, typename AllocatorT>
   4397 VmaVector<T, AllocatorT>::VmaVector(const AllocatorT& allocator)
   4398     : m_Allocator(allocator),
   4399     m_pArray(VMA_NULL),
   4400     m_Count(0),
   4401     m_Capacity(0) {}
   4402 
   4403 template<typename T, typename AllocatorT>
   4404 VmaVector<T, AllocatorT>::VmaVector(size_t count, const AllocatorT& allocator)
   4405     : m_Allocator(allocator),
   4406     m_pArray(count ? (T*)VmaAllocateArray<T>(allocator.m_pCallbacks, count) : VMA_NULL),
   4407     m_Count(count),
   4408     m_Capacity(count) {}
   4409 
   4410 template<typename T, typename AllocatorT>
   4411 VmaVector<T, AllocatorT>::VmaVector(const VmaVector& src)
   4412     : m_Allocator(src.m_Allocator),
   4413     m_pArray(src.m_Count ? (T*)VmaAllocateArray<T>(src.m_Allocator.m_pCallbacks, src.m_Count) : VMA_NULL),
   4414     m_Count(src.m_Count),
   4415     m_Capacity(src.m_Count)
   4416 {
   4417     if (m_Count != 0)
   4418     {
   4419         memcpy(m_pArray, src.m_pArray, m_Count * sizeof(T));
   4420     }
   4421 }
   4422 
   4423 template<typename T, typename AllocatorT>
   4424 VmaVector<T, AllocatorT>& VmaVector<T, AllocatorT>::operator=(const VmaVector& rhs)
   4425 {
   4426     if (&rhs != this)
   4427     {
   4428         resize(rhs.m_Count);
   4429         if (m_Count != 0)
   4430         {
   4431             memcpy(m_pArray, rhs.m_pArray, m_Count * sizeof(T));
   4432         }
   4433     }
   4434     return *this;
   4435 }
   4436 
   4437 template<typename T, typename AllocatorT>
   4438 void VmaVector<T, AllocatorT>::push_back(const T& src)
   4439 {
   4440     const size_t newIndex = size();
   4441     resize(newIndex + 1);
   4442     m_pArray[newIndex] = src;
   4443 }
   4444 
   4445 template<typename T, typename AllocatorT>
   4446 void VmaVector<T, AllocatorT>::reserve(size_t newCapacity, bool freeMemory)
   4447 {
   4448     newCapacity = VMA_MAX(newCapacity, m_Count);
   4449 
   4450     if ((newCapacity < m_Capacity) && !freeMemory)
   4451     {
   4452         newCapacity = m_Capacity;
   4453     }
   4454 
   4455     if (newCapacity != m_Capacity)
   4456     {
    4457         T* const newArray = newCapacity ? VmaAllocateArray<T>(m_Allocator.m_pCallbacks, newCapacity) : VMA_NULL;
   4458         if (m_Count != 0)
   4459         {
   4460             memcpy(newArray, m_pArray, m_Count * sizeof(T));
   4461         }
   4462         VmaFree(m_Allocator.m_pCallbacks, m_pArray);
   4463         m_Capacity = newCapacity;
   4464         m_pArray = newArray;
   4465     }
   4466 }
   4467 
   4468 template<typename T, typename AllocatorT>
   4469 void VmaVector<T, AllocatorT>::resize(size_t newCount)
   4470 {
   4471     size_t newCapacity = m_Capacity;
   4472     if (newCount > m_Capacity)
   4473     {
   4474         newCapacity = VMA_MAX(newCount, VMA_MAX(m_Capacity * 3 / 2, (size_t)8));
   4475     }
   4476 
   4477     if (newCapacity != m_Capacity)
   4478     {
   4479         T* const newArray = newCapacity ? VmaAllocateArray<T>(m_Allocator.m_pCallbacks, newCapacity) : VMA_NULL;
   4480         const size_t elementsToCopy = VMA_MIN(m_Count, newCount);
   4481         if (elementsToCopy != 0)
   4482         {
   4483             memcpy(newArray, m_pArray, elementsToCopy * sizeof(T));
   4484         }
   4485         VmaFree(m_Allocator.m_pCallbacks, m_pArray);
   4486         m_Capacity = newCapacity;
   4487         m_pArray = newArray;
   4488     }
   4489 
   4490     m_Count = newCount;
   4491 }
   4492 
   4493 template<typename T, typename AllocatorT>
   4494 void VmaVector<T, AllocatorT>::shrink_to_fit()
   4495 {
   4496     if (m_Capacity > m_Count)
   4497     {
   4498         T* newArray = VMA_NULL;
   4499         if (m_Count > 0)
   4500         {
   4501             newArray = VmaAllocateArray<T>(m_Allocator.m_pCallbacks, m_Count);
   4502             memcpy(newArray, m_pArray, m_Count * sizeof(T));
   4503         }
   4504         VmaFree(m_Allocator.m_pCallbacks, m_pArray);
   4505         m_Capacity = m_Count;
   4506         m_pArray = newArray;
   4507     }
   4508 }
   4509 
   4510 template<typename T, typename AllocatorT>
   4511 void VmaVector<T, AllocatorT>::insert(size_t index, const T& src)
   4512 {
   4513     VMA_HEAVY_ASSERT(index <= m_Count);
   4514     const size_t oldCount = size();
   4515     resize(oldCount + 1);
   4516     if (index < oldCount)
   4517     {
   4518         memmove(m_pArray + (index + 1), m_pArray + index, (oldCount - index) * sizeof(T));
   4519     }
   4520     m_pArray[index] = src;
   4521 }
   4522 
   4523 template<typename T, typename AllocatorT>
   4524 void VmaVector<T, AllocatorT>::remove(size_t index)
   4525 {
   4526     VMA_HEAVY_ASSERT(index < m_Count);
   4527     const size_t oldCount = size();
   4528     if (index < oldCount - 1)
   4529     {
   4530         memmove(m_pArray + index, m_pArray + (index + 1), (oldCount - index - 1) * sizeof(T));
   4531     }
   4532     resize(oldCount - 1);
   4533 }
   4534 #endif // _VMA_VECTOR_FUNCTIONS
   4535 
   4536 template<typename T, typename allocatorT>
   4537 static void VmaVectorInsert(VmaVector<T, allocatorT>& vec, size_t index, const T& item)
   4538 {
   4539     vec.insert(index, item);
   4540 }
   4541 
   4542 template<typename T, typename allocatorT>
   4543 static void VmaVectorRemove(VmaVector<T, allocatorT>& vec, size_t index)
   4544 {
   4545     vec.remove(index);
   4546 }
   4547 #endif // _VMA_VECTOR
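// Illustrative sketch, not part of the original header: the subset of std::vector
// implemented above in action; elements must stay POD-like because growth uses
// memcpy rather than move construction. Hypothetical, disabled example:
#if 0
void ExampleVmaVector()
{
    VmaVector<uint32_t, VmaStlAllocator<uint32_t>> v(VmaStlAllocator<uint32_t>(VMA_NULL));
    v.push_back(10);
    v.insert(0, 5); // v = { 5, 10 }
    v.remove(1);    // v = { 5 }
    // v.size() == 1 && v[0] == 5
}
#endif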
   4548 
   4549 #ifndef _VMA_SMALL_VECTOR
   4550 /*
   4551 This is a vector (a variable-sized array), optimized for the case when the array is small.
   4552 
   4553 It contains some number of elements in-place, which allows it to avoid heap allocation
   4554 when the actual number of elements is below that threshold. This allows normal "small"
   4555 cases to be fast without losing generality for large inputs.
   4556 */
   4557 template<typename T, typename AllocatorT, size_t N>
   4558 class VmaSmallVector
   4559 {
   4560 public:
   4561     typedef T value_type;
   4562     typedef T* iterator;
   4563 
   4564     VmaSmallVector(const AllocatorT& allocator);
   4565     VmaSmallVector(size_t count, const AllocatorT& allocator);
   4566     template<typename SrcT, typename SrcAllocatorT, size_t SrcN>
   4567     VmaSmallVector(const VmaSmallVector<SrcT, SrcAllocatorT, SrcN>&) = delete;
   4568     template<typename SrcT, typename SrcAllocatorT, size_t SrcN>
   4569     VmaSmallVector<T, AllocatorT, N>& operator=(const VmaSmallVector<SrcT, SrcAllocatorT, SrcN>&) = delete;
   4570     ~VmaSmallVector() = default;
   4571 
   4572     bool empty() const { return m_Count == 0; }
   4573     size_t size() const { return m_Count; }
   4574     T* data() { return m_Count > N ? m_DynamicArray.data() : m_StaticArray; }
   4575     T& front() { VMA_HEAVY_ASSERT(m_Count > 0); return data()[0]; }
   4576     T& back() { VMA_HEAVY_ASSERT(m_Count > 0); return data()[m_Count - 1]; }
   4577     const T* data() const { return m_Count > N ? m_DynamicArray.data() : m_StaticArray; }
   4578     const T& front() const { VMA_HEAVY_ASSERT(m_Count > 0); return data()[0]; }
   4579     const T& back() const { VMA_HEAVY_ASSERT(m_Count > 0); return data()[m_Count - 1]; }
   4580 
   4581     iterator begin() { return data(); }
   4582     iterator end() { return data() + m_Count; }
   4583 
   4584     void pop_front() { VMA_HEAVY_ASSERT(m_Count > 0); remove(0); }
   4585     void pop_back() { VMA_HEAVY_ASSERT(m_Count > 0); resize(size() - 1); }
   4586     void push_front(const T& src) { insert(0, src); }
   4587 
   4588     void push_back(const T& src);
   4589     void resize(size_t newCount, bool freeMemory = false);
   4590     void clear(bool freeMemory = false);
   4591     void insert(size_t index, const T& src);
   4592     void remove(size_t index);
   4593 
   4594     T& operator[](size_t index) { VMA_HEAVY_ASSERT(index < m_Count); return data()[index]; }
   4595     const T& operator[](size_t index) const { VMA_HEAVY_ASSERT(index < m_Count); return data()[index]; }
   4596 
   4597 private:
   4598     size_t m_Count;
    4599     T m_StaticArray[N]; // Used when m_Count <= N
    4600     VmaVector<T, AllocatorT> m_DynamicArray; // Used when m_Count > N
   4601 };
   4602 
   4603 #ifndef _VMA_SMALL_VECTOR_FUNCTIONS
   4604 template<typename T, typename AllocatorT, size_t N>
   4605 VmaSmallVector<T, AllocatorT, N>::VmaSmallVector(const AllocatorT& allocator)
   4606     : m_Count(0),
   4607     m_DynamicArray(allocator) {}
   4608 
   4609 template<typename T, typename AllocatorT, size_t N>
   4610 VmaSmallVector<T, AllocatorT, N>::VmaSmallVector(size_t count, const AllocatorT& allocator)
   4611     : m_Count(count),
   4612     m_DynamicArray(count > N ? count : 0, allocator) {}
   4613 
   4614 template<typename T, typename AllocatorT, size_t N>
   4615 void VmaSmallVector<T, AllocatorT, N>::push_back(const T& src)
   4616 {
   4617     const size_t newIndex = size();
   4618     resize(newIndex + 1);
   4619     data()[newIndex] = src;
   4620 }
   4621 
   4622 template<typename T, typename AllocatorT, size_t N>
   4623 void VmaSmallVector<T, AllocatorT, N>::resize(size_t newCount, bool freeMemory)
   4624 {
   4625     if (newCount > N && m_Count > N)
   4626     {
   4627         // Any direction, staying in m_DynamicArray
   4628         m_DynamicArray.resize(newCount);
   4629         if (freeMemory)
   4630         {
   4631             m_DynamicArray.shrink_to_fit();
   4632         }
   4633     }
   4634     else if (newCount > N && m_Count <= N)
   4635     {
   4636         // Growing, moving from m_StaticArray to m_DynamicArray
   4637         m_DynamicArray.resize(newCount);
   4638         if (m_Count > 0)
   4639         {
   4640             memcpy(m_DynamicArray.data(), m_StaticArray, m_Count * sizeof(T));
   4641         }
   4642     }
   4643     else if (newCount <= N && m_Count > N)
   4644     {
   4645         // Shrinking, moving from m_DynamicArray to m_StaticArray
   4646         if (newCount > 0)
   4647         {
   4648             memcpy(m_StaticArray, m_DynamicArray.data(), newCount * sizeof(T));
   4649         }
   4650         m_DynamicArray.resize(0);
   4651         if (freeMemory)
   4652         {
   4653             m_DynamicArray.shrink_to_fit();
   4654         }
   4655     }
   4656     else
   4657     {
   4658         // Any direction, staying in m_StaticArray - nothing to do here
   4659     }
   4660     m_Count = newCount;
   4661 }
   4662 
   4663 template<typename T, typename AllocatorT, size_t N>
   4664 void VmaSmallVector<T, AllocatorT, N>::clear(bool freeMemory)
   4665 {
   4666     m_DynamicArray.clear();
   4667     if (freeMemory)
   4668     {
   4669         m_DynamicArray.shrink_to_fit();
   4670     }
   4671     m_Count = 0;
   4672 }
   4673 
   4674 template<typename T, typename AllocatorT, size_t N>
   4675 void VmaSmallVector<T, AllocatorT, N>::insert(size_t index, const T& src)
   4676 {
   4677     VMA_HEAVY_ASSERT(index <= m_Count);
   4678     const size_t oldCount = size();
   4679     resize(oldCount + 1);
   4680     T* const dataPtr = data();
   4681     if (index < oldCount)
   4682     {
    4683     //  I know, this could be more optimal for the case where memmove can be a memcpy directly from m_StaticArray to m_DynamicArray.
   4684         memmove(dataPtr + (index + 1), dataPtr + index, (oldCount - index) * sizeof(T));
   4685     }
   4686     dataPtr[index] = src;
   4687 }
   4688 
   4689 template<typename T, typename AllocatorT, size_t N>
   4690 void VmaSmallVector<T, AllocatorT, N>::remove(size_t index)
   4691 {
   4692     VMA_HEAVY_ASSERT(index < m_Count);
   4693     const size_t oldCount = size();
   4694     if (index < oldCount - 1)
   4695     {
    4696         //  I know, this could be more optimal for the case where memmove can be a memcpy directly from m_DynamicArray to m_StaticArray.
   4697         T* const dataPtr = data();
   4698         memmove(dataPtr + index, dataPtr + (index + 1), (oldCount - index - 1) * sizeof(T));
   4699     }
   4700     resize(oldCount - 1);
   4701 }
   4702 #endif // _VMA_SMALL_VECTOR_FUNCTIONS
   4703 #endif // _VMA_SMALL_VECTOR
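// Illustrative sketch, not part of the original header: up to N elements live in
// the in-place array; the first resize beyond N copies them into the heap-backed
// VmaVector. Hypothetical, disabled example:
#if 0
void ExampleSmallVector()
{
    VmaSmallVector<uint32_t, VmaStlAllocator<uint32_t>, 4> sv(
        VmaStlAllocator<uint32_t>(VMA_NULL));
    for (uint32_t i = 0; i < 4; ++i)
        sv.push_back(i); // still in m_StaticArray
    sv.push_back(4);     // crosses N == 4, contents move to m_DynamicArray
    // sv.size() == 5
}
#endif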
   4704 
   4705 #ifndef _VMA_POOL_ALLOCATOR
   4706 /*
    4707 Allocator for objects of type T, using a list of arrays (pools) to speed up
    4708 allocation. The number of elements that can be allocated is not bounded, because
    4709 the allocator can create multiple blocks.
   4710 */
   4711 template<typename T>
   4712 class VmaPoolAllocator
   4713 {
   4714     VMA_CLASS_NO_COPY_NO_MOVE(VmaPoolAllocator)
   4715 public:
   4716     VmaPoolAllocator(const VkAllocationCallbacks* pAllocationCallbacks, uint32_t firstBlockCapacity);
   4717     ~VmaPoolAllocator();
   4718     template<typename... Types> T* Alloc(Types&&... args);
   4719     void Free(T* ptr);
   4720 
   4721 private:
   4722     union Item
   4723     {
   4724         uint32_t NextFreeIndex;
   4725         alignas(T) char Value[sizeof(T)];
   4726     };
   4727     struct ItemBlock
   4728     {
   4729         Item* pItems;
   4730         uint32_t Capacity;
   4731         uint32_t FirstFreeIndex;
   4732     };
   4733 
   4734     const VkAllocationCallbacks* m_pAllocationCallbacks;
   4735     const uint32_t m_FirstBlockCapacity;
   4736     VmaVector<ItemBlock, VmaStlAllocator<ItemBlock>> m_ItemBlocks;
   4737 
   4738     ItemBlock& CreateNewBlock();
   4739 };
   4740 
   4741 #ifndef _VMA_POOL_ALLOCATOR_FUNCTIONS
   4742 template<typename T>
   4743 VmaPoolAllocator<T>::VmaPoolAllocator(const VkAllocationCallbacks* pAllocationCallbacks, uint32_t firstBlockCapacity)
   4744     : m_pAllocationCallbacks(pAllocationCallbacks),
   4745     m_FirstBlockCapacity(firstBlockCapacity),
   4746     m_ItemBlocks(VmaStlAllocator<ItemBlock>(pAllocationCallbacks))
   4747 {
   4748     VMA_ASSERT(m_FirstBlockCapacity > 1);
   4749 }
   4750 
   4751 template<typename T>
   4752 VmaPoolAllocator<T>::~VmaPoolAllocator()
   4753 {
   4754     for (size_t i = m_ItemBlocks.size(); i--;)
   4755         vma_delete_array(m_pAllocationCallbacks, m_ItemBlocks[i].pItems, m_ItemBlocks[i].Capacity);
   4756     m_ItemBlocks.clear();
   4757 }
   4758 
   4759 template<typename T>
   4760 template<typename... Types> T* VmaPoolAllocator<T>::Alloc(Types&&... args)
   4761 {
   4762     for (size_t i = m_ItemBlocks.size(); i--; )
   4763     {
   4764         ItemBlock& block = m_ItemBlocks[i];
   4765         // This block has some free items: Use first one.
   4766         if (block.FirstFreeIndex != UINT32_MAX)
   4767         {
   4768             Item* const pItem = &block.pItems[block.FirstFreeIndex];
   4769             block.FirstFreeIndex = pItem->NextFreeIndex;
   4770             T* result = (T*)&pItem->Value;
   4771             new(result)T(std::forward<Types>(args)...); // Explicit constructor call.
   4772             return result;
   4773         }
   4774     }
   4775 
   4776     // No block has free item: Create new one and use it.
   4777     ItemBlock& newBlock = CreateNewBlock();
   4778     Item* const pItem = &newBlock.pItems[0];
   4779     newBlock.FirstFreeIndex = pItem->NextFreeIndex;
   4780     T* result = (T*)&pItem->Value;
   4781     new(result) T(std::forward<Types>(args)...); // Explicit constructor call.
   4782     return result;
   4783 }
   4784 
   4785 template<typename T>
   4786 void VmaPoolAllocator<T>::Free(T* ptr)
   4787 {
   4788     // Search all memory blocks to find ptr.
   4789     for (size_t i = m_ItemBlocks.size(); i--; )
   4790     {
   4791         ItemBlock& block = m_ItemBlocks[i];
   4792 
   4793         // Casting to union.
   4794         Item* pItemPtr;
   4795         memcpy(&pItemPtr, &ptr, sizeof(pItemPtr));
   4796 
   4797         // Check if pItemPtr is in address range of this block.
   4798         if ((pItemPtr >= block.pItems) && (pItemPtr < block.pItems + block.Capacity))
   4799         {
   4800             ptr->~T(); // Explicit destructor call.
   4801             const uint32_t index = static_cast<uint32_t>(pItemPtr - block.pItems);
   4802             pItemPtr->NextFreeIndex = block.FirstFreeIndex;
   4803             block.FirstFreeIndex = index;
   4804             return;
   4805         }
   4806     }
   4807     VMA_ASSERT(0 && "Pointer doesn't belong to this memory pool.");
   4808 }
   4809 
   4810 template<typename T>
   4811 typename VmaPoolAllocator<T>::ItemBlock& VmaPoolAllocator<T>::CreateNewBlock()
   4812 {
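            // Each new block is 1.5x the capacity of the previous one, starting from
            // m_FirstBlockCapacity (geometric growth).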
   4813     const uint32_t newBlockCapacity = m_ItemBlocks.empty() ?
   4814         m_FirstBlockCapacity : m_ItemBlocks.back().Capacity * 3 / 2;
   4815 
   4816     const ItemBlock newBlock =
   4817     {
   4818         vma_new_array(m_pAllocationCallbacks, Item, newBlockCapacity),
   4819         newBlockCapacity,
   4820         0
   4821     };
   4822 
   4823     m_ItemBlocks.push_back(newBlock);
   4824 
   4825     // Setup singly-linked list of all free items in this block.
   4826     for (uint32_t i = 0; i < newBlockCapacity - 1; ++i)
   4827         newBlock.pItems[i].NextFreeIndex = i + 1;
   4828     newBlock.pItems[newBlockCapacity - 1].NextFreeIndex = UINT32_MAX;
   4829     return m_ItemBlocks.back();
   4830 }
   4831 #endif // _VMA_POOL_ALLOCATOR_FUNCTIONS
   4832 #endif // _VMA_POOL_ALLOCATOR
   4833 
   4834 #ifndef _VMA_RAW_LIST
   4835 template<typename T>
   4836 struct VmaListItem
   4837 {
   4838     VmaListItem* pPrev;
   4839     VmaListItem* pNext;
   4840     T Value;
   4841 };
   4842 
   4843 // Doubly linked list.
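        // Nodes are taken from an embedded VmaPoolAllocator (m_ItemAllocator), so pushes
        // and pops reuse pooled memory instead of going through VkAllocationCallbacks on
        // every operation.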
   4844 template<typename T>
   4845 class VmaRawList
   4846 {
   4847     VMA_CLASS_NO_COPY_NO_MOVE(VmaRawList)
   4848 public:
   4849     typedef VmaListItem<T> ItemType;
   4850 
   4851     VmaRawList(const VkAllocationCallbacks* pAllocationCallbacks);
   4852     // Intentionally not calling Clear, because that would waste time
   4853     // returning all items to m_ItemAllocator as free.
   4854     ~VmaRawList() = default;
   4855 
   4856     size_t GetCount() const { return m_Count; }
   4857     bool IsEmpty() const { return m_Count == 0; }
   4858 
   4859     ItemType* Front() { return m_pFront; }
   4860     ItemType* Back() { return m_pBack; }
   4861     const ItemType* Front() const { return m_pFront; }
   4862     const ItemType* Back() const { return m_pBack; }
   4863 
   4864     ItemType* PushFront();
   4865     ItemType* PushBack();
   4866     ItemType* PushFront(const T& value);
   4867     ItemType* PushBack(const T& value);
   4868     void PopFront();
   4869     void PopBack();
   4870 
   4871     // Item can be null - it means PushBack.
   4872     ItemType* InsertBefore(ItemType* pItem);
   4873     // Item can be null - it means PushFront.
   4874     ItemType* InsertAfter(ItemType* pItem);
   4875     ItemType* InsertBefore(ItemType* pItem, const T& value);
   4876     ItemType* InsertAfter(ItemType* pItem, const T& value);
   4877 
   4878     void Clear();
   4879     void Remove(ItemType* pItem);
   4880 
   4881 private:
   4882     const VkAllocationCallbacks* const m_pAllocationCallbacks;
   4883     VmaPoolAllocator<ItemType> m_ItemAllocator;
   4884     ItemType* m_pFront;
   4885     ItemType* m_pBack;
   4886     size_t m_Count;
   4887 };
   4888 
   4889 #ifndef _VMA_RAW_LIST_FUNCTIONS
   4890 template<typename T>
   4891 VmaRawList<T>::VmaRawList(const VkAllocationCallbacks* pAllocationCallbacks)
   4892     : m_pAllocationCallbacks(pAllocationCallbacks),
   4893     m_ItemAllocator(pAllocationCallbacks, 128),
   4894     m_pFront(VMA_NULL),
   4895     m_pBack(VMA_NULL),
   4896     m_Count(0) {}
   4897 
   4898 template<typename T>
   4899 VmaListItem<T>* VmaRawList<T>::PushFront()
   4900 {
   4901     ItemType* const pNewItem = m_ItemAllocator.Alloc();
   4902     pNewItem->pPrev = VMA_NULL;
   4903     if (IsEmpty())
   4904     {
   4905         pNewItem->pNext = VMA_NULL;
   4906         m_pFront = pNewItem;
   4907         m_pBack = pNewItem;
   4908         m_Count = 1;
   4909     }
   4910     else
   4911     {
   4912         pNewItem->pNext = m_pFront;
   4913         m_pFront->pPrev = pNewItem;
   4914         m_pFront = pNewItem;
   4915         ++m_Count;
   4916     }
   4917     return pNewItem;
   4918 }
   4919 
   4920 template<typename T>
   4921 VmaListItem<T>* VmaRawList<T>::PushBack()
   4922 {
   4923     ItemType* const pNewItem = m_ItemAllocator.Alloc();
   4924     pNewItem->pNext = VMA_NULL;
   4925     if(IsEmpty())
   4926     {
   4927         pNewItem->pPrev = VMA_NULL;
   4928         m_pFront = pNewItem;
   4929         m_pBack = pNewItem;
   4930         m_Count = 1;
   4931     }
   4932     else
   4933     {
   4934         pNewItem->pPrev = m_pBack;
   4935         m_pBack->pNext = pNewItem;
   4936         m_pBack = pNewItem;
   4937         ++m_Count;
   4938     }
   4939     return pNewItem;
   4940 }
   4941 
   4942 template<typename T>
   4943 VmaListItem<T>* VmaRawList<T>::PushFront(const T& value)
   4944 {
   4945     ItemType* const pNewItem = PushFront();
   4946     pNewItem->Value = value;
   4947     return pNewItem;
   4948 }
   4949 
   4950 template<typename T>
   4951 VmaListItem<T>* VmaRawList<T>::PushBack(const T& value)
   4952 {
   4953     ItemType* const pNewItem = PushBack();
   4954     pNewItem->Value = value;
   4955     return pNewItem;
   4956 }
   4957 
   4958 template<typename T>
   4959 void VmaRawList<T>::PopFront()
   4960 {
   4961     VMA_HEAVY_ASSERT(m_Count > 0);
   4962     ItemType* const pFrontItem = m_pFront;
   4963     ItemType* const pNextItem = pFrontItem->pNext;
   4964     if (pNextItem != VMA_NULL)
   4965     {
   4966         pNextItem->pPrev = VMA_NULL;
   4967     }
   4968     m_pFront = pNextItem;
   4969     m_ItemAllocator.Free(pFrontItem);
   4970     --m_Count;
   4971 }
   4972 
   4973 template<typename T>
   4974 void VmaRawList<T>::PopBack()
   4975 {
   4976     VMA_HEAVY_ASSERT(m_Count > 0);
   4977     ItemType* const pBackItem = m_pBack;
   4978     ItemType* const pPrevItem = pBackItem->pPrev;
   4979     if(pPrevItem != VMA_NULL)
   4980     {
   4981         pPrevItem->pNext = VMA_NULL;
   4982     }
   4983     m_pBack = pPrevItem;
   4984     m_ItemAllocator.Free(pBackItem);
   4985     --m_Count;
   4986 }
   4987 
   4988 template<typename T>
   4989 void VmaRawList<T>::Clear()
   4990 {
   4991     if (!IsEmpty())
   4992     {
   4993         ItemType* pItem = m_pBack;
   4994         while (pItem != VMA_NULL)
   4995         {
   4996             ItemType* const pPrevItem = pItem->pPrev;
   4997             m_ItemAllocator.Free(pItem);
   4998             pItem = pPrevItem;
   4999         }
   5000         m_pFront = VMA_NULL;
   5001         m_pBack = VMA_NULL;
   5002         m_Count = 0;
   5003     }
   5004 }
   5005 
   5006 template<typename T>
   5007 void VmaRawList<T>::Remove(ItemType* pItem)
   5008 {
   5009     VMA_HEAVY_ASSERT(pItem != VMA_NULL);
   5010     VMA_HEAVY_ASSERT(m_Count > 0);
   5011 
   5012     if(pItem->pPrev != VMA_NULL)
   5013     {
   5014         pItem->pPrev->pNext = pItem->pNext;
   5015     }
   5016     else
   5017     {
   5018         VMA_HEAVY_ASSERT(m_pFront == pItem);
   5019         m_pFront = pItem->pNext;
   5020     }
   5021 
   5022     if(pItem->pNext != VMA_NULL)
   5023     {
   5024         pItem->pNext->pPrev = pItem->pPrev;
   5025     }
   5026     else
   5027     {
   5028         VMA_HEAVY_ASSERT(m_pBack == pItem);
   5029         m_pBack = pItem->pPrev;
   5030     }
   5031 
   5032     m_ItemAllocator.Free(pItem);
   5033     --m_Count;
   5034 }
   5035 
   5036 template<typename T>
   5037 VmaListItem<T>* VmaRawList<T>::InsertBefore(ItemType* pItem)
   5038 {
   5039     if(pItem != VMA_NULL)
   5040     {
   5041         ItemType* const prevItem = pItem->pPrev;
   5042         ItemType* const newItem = m_ItemAllocator.Alloc();
   5043         newItem->pPrev = prevItem;
   5044         newItem->pNext = pItem;
   5045         pItem->pPrev = newItem;
   5046         if(prevItem != VMA_NULL)
   5047         {
   5048             prevItem->pNext = newItem;
   5049         }
   5050         else
   5051         {
   5052             VMA_HEAVY_ASSERT(m_pFront == pItem);
   5053             m_pFront = newItem;
   5054         }
   5055         ++m_Count;
   5056         return newItem;
   5057     }
   5058     else
   5059         return PushBack();
   5060 }
   5061 
   5062 template<typename T>
   5063 VmaListItem<T>* VmaRawList<T>::InsertAfter(ItemType* pItem)
   5064 {
   5065     if(pItem != VMA_NULL)
   5066     {
   5067         ItemType* const nextItem = pItem->pNext;
   5068         ItemType* const newItem = m_ItemAllocator.Alloc();
   5069         newItem->pNext = nextItem;
   5070         newItem->pPrev = pItem;
   5071         pItem->pNext = newItem;
   5072         if(nextItem != VMA_NULL)
   5073         {
   5074             nextItem->pPrev = newItem;
   5075         }
   5076         else
   5077         {
   5078             VMA_HEAVY_ASSERT(m_pBack == pItem);
   5079             m_pBack = newItem;
   5080         }
   5081         ++m_Count;
   5082         return newItem;
   5083     }
   5084     else
   5085         return PushFront();
   5086 }
   5087 
   5088 template<typename T>
   5089 VmaListItem<T>* VmaRawList<T>::InsertBefore(ItemType* pItem, const T& value)
   5090 {
   5091     ItemType* const newItem = InsertBefore(pItem);
   5092     newItem->Value = value;
   5093     return newItem;
   5094 }
   5095 
   5096 template<typename T>
   5097 VmaListItem<T>* VmaRawList<T>::InsertAfter(ItemType* pItem, const T& value)
   5098 {
   5099     ItemType* const newItem = InsertAfter(pItem);
   5100     newItem->Value = value;
   5101     return newItem;
   5102 }
   5103 #endif // _VMA_RAW_LIST_FUNCTIONS
   5104 #endif // _VMA_RAW_LIST
   5105 
   5106 #ifndef _VMA_LIST
   5107 template<typename T, typename AllocatorT>
   5108 class VmaList
   5109 {
   5110     VMA_CLASS_NO_COPY_NO_MOVE(VmaList)
   5111 public:
   5112     class reverse_iterator;
   5113     class const_iterator;
   5114     class const_reverse_iterator;
   5115 
   5116     class iterator
   5117     {
   5118         friend class const_iterator;
   5119         friend class VmaList<T, AllocatorT>;
   5120     public:
   5121         iterator() :  m_pList(VMA_NULL), m_pItem(VMA_NULL) {}
   5122         iterator(const reverse_iterator& src) : m_pList(src.m_pList), m_pItem(src.m_pItem) {}
   5123 
   5124         T& operator*() const { VMA_HEAVY_ASSERT(m_pItem != VMA_NULL); return m_pItem->Value; }
   5125         T* operator->() const { VMA_HEAVY_ASSERT(m_pItem != VMA_NULL); return &m_pItem->Value; }
   5126 
   5127         bool operator==(const iterator& rhs) const { VMA_HEAVY_ASSERT(m_pList == rhs.m_pList); return m_pItem == rhs.m_pItem; }
   5128         bool operator!=(const iterator& rhs) const { VMA_HEAVY_ASSERT(m_pList == rhs.m_pList); return m_pItem != rhs.m_pItem; }
   5129 
   5130         iterator operator++(int) { iterator result = *this; ++*this; return result; }
   5131         iterator operator--(int) { iterator result = *this; --*this; return result; }
   5132 
   5133         iterator& operator++() { VMA_HEAVY_ASSERT(m_pItem != VMA_NULL); m_pItem = m_pItem->pNext; return *this; }
   5134         iterator& operator--();
   5135 
   5136     private:
   5137         VmaRawList<T>* m_pList;
   5138         VmaListItem<T>* m_pItem;
   5139 
   5140         iterator(VmaRawList<T>* pList, VmaListItem<T>* pItem) : m_pList(pList),  m_pItem(pItem) {}
   5141     };
   5142     class reverse_iterator
   5143     {
   5144         friend class const_reverse_iterator;
   5145         friend class VmaList<T, AllocatorT>;
   5146     public:
   5147         reverse_iterator() : m_pList(VMA_NULL), m_pItem(VMA_NULL) {}
   5148         reverse_iterator(const iterator& src) : m_pList(src.m_pList), m_pItem(src.m_pItem) {}
   5149 
   5150         T& operator*() const { VMA_HEAVY_ASSERT(m_pItem != VMA_NULL); return m_pItem->Value; }
   5151         T* operator->() const { VMA_HEAVY_ASSERT(m_pItem != VMA_NULL); return &m_pItem->Value; }
   5152 
   5153         bool operator==(const reverse_iterator& rhs) const { VMA_HEAVY_ASSERT(m_pList == rhs.m_pList); return m_pItem == rhs.m_pItem; }
   5154         bool operator!=(const reverse_iterator& rhs) const { VMA_HEAVY_ASSERT(m_pList == rhs.m_pList); return m_pItem != rhs.m_pItem; }
   5155 
   5156         reverse_iterator operator++(int) { reverse_iterator result = *this; ++*this; return result; }
   5157         reverse_iterator operator--(int) { reverse_iterator result = *this; --*this; return result; }
   5158 
   5159         reverse_iterator& operator++() { VMA_HEAVY_ASSERT(m_pItem != VMA_NULL); m_pItem = m_pItem->pPrev; return *this; }
   5160         reverse_iterator& operator--();
   5161 
   5162     private:
   5163         VmaRawList<T>* m_pList;
   5164         VmaListItem<T>* m_pItem;
   5165 
   5166         reverse_iterator(VmaRawList<T>* pList, VmaListItem<T>* pItem) : m_pList(pList),  m_pItem(pItem) {}
   5167     };
   5168     class const_iterator
   5169     {
   5170         friend class VmaList<T, AllocatorT>;
   5171     public:
   5172         const_iterator() : m_pList(VMA_NULL), m_pItem(VMA_NULL) {}
   5173         const_iterator(const iterator& src) : m_pList(src.m_pList), m_pItem(src.m_pItem) {}
   5174         const_iterator(const reverse_iterator& src) : m_pList(src.m_pList), m_pItem(src.m_pItem) {}
   5175 
   5176         iterator drop_const() { return { const_cast<VmaRawList<T>*>(m_pList), const_cast<VmaListItem<T>*>(m_pItem) }; }
   5177 
   5178         const T& operator*() const { VMA_HEAVY_ASSERT(m_pItem != VMA_NULL); return m_pItem->Value; }
   5179         const T* operator->() const { VMA_HEAVY_ASSERT(m_pItem != VMA_NULL); return &m_pItem->Value; }
   5180 
   5181         bool operator==(const const_iterator& rhs) const { VMA_HEAVY_ASSERT(m_pList == rhs.m_pList); return m_pItem == rhs.m_pItem; }
   5182         bool operator!=(const const_iterator& rhs) const { VMA_HEAVY_ASSERT(m_pList == rhs.m_pList); return m_pItem != rhs.m_pItem; }
   5183 
   5184         const_iterator operator++(int) { const_iterator result = *this; ++*this; return result; }
   5185         const_iterator operator--(int) { const_iterator result = *this; --*this; return result; }
   5186 
   5187         const_iterator& operator++() { VMA_HEAVY_ASSERT(m_pItem != VMA_NULL); m_pItem = m_pItem->pNext; return *this; }
   5188         const_iterator& operator--();
   5189 
   5190     private:
   5191         const VmaRawList<T>* m_pList;
   5192         const VmaListItem<T>* m_pItem;
   5193 
   5194         const_iterator(const VmaRawList<T>* pList, const VmaListItem<T>* pItem) : m_pList(pList), m_pItem(pItem) {}
   5195     };
   5196     class const_reverse_iterator
   5197     {
   5198         friend class VmaList<T, AllocatorT>;
   5199     public:
   5200         const_reverse_iterator() : m_pList(VMA_NULL), m_pItem(VMA_NULL) {}
   5201         const_reverse_iterator(const reverse_iterator& src) : m_pList(src.m_pList), m_pItem(src.m_pItem) {}
   5202         const_reverse_iterator(const iterator& src) : m_pList(src.m_pList), m_pItem(src.m_pItem) {}
   5203 
   5204         reverse_iterator drop_const() { return { const_cast<VmaRawList<T>*>(m_pList), const_cast<VmaListItem<T>*>(m_pItem) }; }
   5205 
   5206         const T& operator*() const { VMA_HEAVY_ASSERT(m_pItem != VMA_NULL); return m_pItem->Value; }
   5207         const T* operator->() const { VMA_HEAVY_ASSERT(m_pItem != VMA_NULL); return &m_pItem->Value; }
   5208 
   5209         bool operator==(const const_reverse_iterator& rhs) const { VMA_HEAVY_ASSERT(m_pList == rhs.m_pList); return m_pItem == rhs.m_pItem; }
   5210         bool operator!=(const const_reverse_iterator& rhs) const { VMA_HEAVY_ASSERT(m_pList == rhs.m_pList); return m_pItem != rhs.m_pItem; }
   5211 
   5212         const_reverse_iterator operator++(int) { const_reverse_iterator result = *this; ++*this; return result; }
   5213         const_reverse_iterator operator--(int) { const_reverse_iterator result = *this; --*this; return result; }
   5214 
   5215         const_reverse_iterator& operator++() { VMA_HEAVY_ASSERT(m_pItem != VMA_NULL); m_pItem = m_pItem->pPrev; return *this; }
   5216         const_reverse_iterator& operator--();
   5217 
   5218     private:
   5219         const VmaRawList<T>* m_pList;
   5220         const VmaListItem<T>* m_pItem;
   5221 
   5222         const_reverse_iterator(const VmaRawList<T>* pList, const VmaListItem<T>* pItem) : m_pList(pList), m_pItem(pItem) {}
   5223     };
   5224 
   5225     VmaList(const AllocatorT& allocator) : m_RawList(allocator.m_pCallbacks) {}
   5226 
   5227     bool empty() const { return m_RawList.IsEmpty(); }
   5228     size_t size() const { return m_RawList.GetCount(); }
   5229 
   5230     iterator begin() { return iterator(&m_RawList, m_RawList.Front()); }
   5231     iterator end() { return iterator(&m_RawList, VMA_NULL); }
   5232 
   5233     const_iterator cbegin() const { return const_iterator(&m_RawList, m_RawList.Front()); }
   5234     const_iterator cend() const { return const_iterator(&m_RawList, VMA_NULL); }
   5235 
   5236     const_iterator begin() const { return cbegin(); }
   5237     const_iterator end() const { return cend(); }
   5238 
   5239     reverse_iterator rbegin() { return reverse_iterator(&m_RawList, m_RawList.Back()); }
   5240     reverse_iterator rend() { return reverse_iterator(&m_RawList, VMA_NULL); }
   5241 
   5242     const_reverse_iterator crbegin() const { return const_reverse_iterator(&m_RawList, m_RawList.Back()); }
   5243     const_reverse_iterator crend() const { return const_reverse_iterator(&m_RawList, VMA_NULL); }
   5244 
   5245     const_reverse_iterator rbegin() const { return crbegin(); }
   5246     const_reverse_iterator rend() const { return crend(); }
   5247 
   5248     void push_back(const T& value) { m_RawList.PushBack(value); }
   5249     iterator insert(iterator it, const T& value) { return iterator(&m_RawList, m_RawList.InsertBefore(it.m_pItem, value)); }
   5250 
   5251     void clear() { m_RawList.Clear(); }
   5252     void erase(iterator it) { m_RawList.Remove(it.m_pItem); }
   5253 
   5254 private:
   5255     VmaRawList<T> m_RawList;
   5256 };
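        /*
        Illustrative usage sketch, not part of the library (pAllocationCallbacks is a
        hypothetical placeholder):

            typedef VmaStlAllocator<int> IntAlloc;
            VmaList<int, IntAlloc> list(IntAlloc(pAllocationCallbacks));
            list.push_back(1);
            list.push_back(2);
            for (VmaList<int, IntAlloc>::iterator it = list.begin(); it != list.end(); ++it)
                *it += 10;
        */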
   5257 
   5258 #ifndef _VMA_LIST_FUNCTIONS
   5259 template<typename T, typename AllocatorT>
   5260 typename VmaList<T, AllocatorT>::iterator& VmaList<T, AllocatorT>::iterator::operator--()
   5261 {
   5262     if (m_pItem != VMA_NULL)
   5263     {
   5264         m_pItem = m_pItem->pPrev;
   5265     }
   5266     else
   5267     {
   5268         VMA_HEAVY_ASSERT(!m_pList->IsEmpty());
   5269         m_pItem = m_pList->Back();
   5270     }
   5271     return *this;
   5272 }
   5273 
   5274 template<typename T, typename AllocatorT>
   5275 typename VmaList<T, AllocatorT>::reverse_iterator& VmaList<T, AllocatorT>::reverse_iterator::operator--()
   5276 {
   5277     if (m_pItem != VMA_NULL)
   5278     {
   5279         m_pItem = m_pItem->pNext;
   5280     }
   5281     else
   5282     {
   5283         VMA_HEAVY_ASSERT(!m_pList->IsEmpty());
   5284         m_pItem = m_pList->Front();
   5285     }
   5286     return *this;
   5287 }
   5288 
   5289 template<typename T, typename AllocatorT>
   5290 typename VmaList<T, AllocatorT>::const_iterator& VmaList<T, AllocatorT>::const_iterator::operator--()
   5291 {
   5292     if (m_pItem != VMA_NULL)
   5293     {
   5294         m_pItem = m_pItem->pPrev;
   5295     }
   5296     else
   5297     {
   5298         VMA_HEAVY_ASSERT(!m_pList->IsEmpty());
   5299         m_pItem = m_pList->Back();
   5300     }
   5301     return *this;
   5302 }
   5303 
   5304 template<typename T, typename AllocatorT>
   5305 typename VmaList<T, AllocatorT>::const_reverse_iterator& VmaList<T, AllocatorT>::const_reverse_iterator::operator--()
   5306 {
   5307     if (m_pItem != VMA_NULL)
   5308     {
   5309         m_pItem = m_pItem->pNext;
   5310     }
   5311     else
   5312     {
   5313         VMA_HEAVY_ASSERT(!m_pList->IsEmpty());
   5314         m_pItem = m_pList->Front(); // Symmetric with reverse_iterator: --rend() yields the front item.
   5315     }
   5316     return *this;
   5317 }
   5318 #endif // _VMA_LIST_FUNCTIONS
   5319 #endif // _VMA_LIST
   5320 
   5321 #ifndef _VMA_INTRUSIVE_LINKED_LIST
   5322 /*
   5323 Expected interface of ItemTypeTraits:
   5324 struct MyItemTypeTraits
   5325 {
   5326     typedef MyItem ItemType;
   5327     static ItemType* GetPrev(const ItemType* item) { return item->myPrevPtr; }
   5328     static ItemType* GetNext(const ItemType* item) { return item->myNextPtr; }
   5329     static ItemType*& AccessPrev(ItemType* item) { return item->myPrevPtr; }
   5330     static ItemType*& AccessNext(ItemType* item) { return item->myNextPtr; }
   5331 };
   5332 */
   5333 template<typename ItemTypeTraits>
   5334 class VmaIntrusiveLinkedList
   5335 {
   5336 public:
   5337     typedef typename ItemTypeTraits::ItemType ItemType;
   5338     static ItemType* GetPrev(const ItemType* item) { return ItemTypeTraits::GetPrev(item); }
   5339     static ItemType* GetNext(const ItemType* item) { return ItemTypeTraits::GetNext(item); }
   5340 
   5341     // Movable, not copyable.
   5342     VmaIntrusiveLinkedList() = default;
   5343     VmaIntrusiveLinkedList(VmaIntrusiveLinkedList && src);
   5344     VmaIntrusiveLinkedList(const VmaIntrusiveLinkedList&) = delete;
   5345     VmaIntrusiveLinkedList& operator=(VmaIntrusiveLinkedList&& src);
   5346     VmaIntrusiveLinkedList& operator=(const VmaIntrusiveLinkedList&) = delete;
   5347     ~VmaIntrusiveLinkedList() { VMA_HEAVY_ASSERT(IsEmpty()); }
   5348 
   5349     size_t GetCount() const { return m_Count; }
   5350     bool IsEmpty() const { return m_Count == 0; }
   5351     ItemType* Front() { return m_Front; }
   5352     ItemType* Back() { return m_Back; }
   5353     const ItemType* Front() const { return m_Front; }
   5354     const ItemType* Back() const { return m_Back; }
   5355 
   5356     void PushBack(ItemType* item);
   5357     void PushFront(ItemType* item);
   5358     ItemType* PopBack();
   5359     ItemType* PopFront();
   5360 
   5361     // existingItem can be null - it means PushBack.
   5362     void InsertBefore(ItemType* existingItem, ItemType* newItem);
   5363     // existingItem can be null - it means PushFront.
   5364     void InsertAfter(ItemType* existingItem, ItemType* newItem);
   5365     void Remove(ItemType* item);
   5366     void RemoveAll();
   5367 
   5368 private:
   5369     ItemType* m_Front = VMA_NULL;
   5370     ItemType* m_Back = VMA_NULL;
   5371     size_t m_Count = 0;
   5372 };
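        /*
        Illustrative sketch, not part of the library; MyItem and MyItemTypeTraits follow
        the expected interface documented above:

            MyItem a = {}, b = {}; // myPrevPtr/myNextPtr start out null.
            VmaIntrusiveLinkedList<MyItemTypeTraits> list;
            list.PushBack(&a);
            list.InsertAfter(&a, &b); // Order is now: a, b.
            list.Remove(&b);
            list.Remove(&a); // The list must be empty before it is destroyed.
        */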
   5373 
   5374 #ifndef _VMA_INTRUSIVE_LINKED_LIST_FUNCTIONS
   5375 template<typename ItemTypeTraits>
   5376 VmaIntrusiveLinkedList<ItemTypeTraits>::VmaIntrusiveLinkedList(VmaIntrusiveLinkedList&& src)
   5377     : m_Front(src.m_Front), m_Back(src.m_Back), m_Count(src.m_Count)
   5378 {
   5379     src.m_Front = src.m_Back = VMA_NULL;
   5380     src.m_Count = 0;
   5381 }
   5382 
   5383 template<typename ItemTypeTraits>
   5384 VmaIntrusiveLinkedList<ItemTypeTraits>& VmaIntrusiveLinkedList<ItemTypeTraits>::operator=(VmaIntrusiveLinkedList&& src)
   5385 {
   5386     if (&src != this)
   5387     {
   5388         VMA_HEAVY_ASSERT(IsEmpty());
   5389         m_Front = src.m_Front;
   5390         m_Back = src.m_Back;
   5391         m_Count = src.m_Count;
   5392         src.m_Front = src.m_Back = VMA_NULL;
   5393         src.m_Count = 0;
   5394     }
   5395     return *this;
   5396 }
   5397 
   5398 template<typename ItemTypeTraits>
   5399 void VmaIntrusiveLinkedList<ItemTypeTraits>::PushBack(ItemType* item)
   5400 {
   5401     VMA_HEAVY_ASSERT(ItemTypeTraits::GetPrev(item) == VMA_NULL && ItemTypeTraits::GetNext(item) == VMA_NULL);
   5402     if (IsEmpty())
   5403     {
   5404         m_Front = item;
   5405         m_Back = item;
   5406         m_Count = 1;
   5407     }
   5408     else
   5409     {
   5410         ItemTypeTraits::AccessPrev(item) = m_Back;
   5411         ItemTypeTraits::AccessNext(m_Back) = item;
   5412         m_Back = item;
   5413         ++m_Count;
   5414     }
   5415 }
   5416 
   5417 template<typename ItemTypeTraits>
   5418 void VmaIntrusiveLinkedList<ItemTypeTraits>::PushFront(ItemType* item)
   5419 {
   5420     VMA_HEAVY_ASSERT(ItemTypeTraits::GetPrev(item) == VMA_NULL && ItemTypeTraits::GetNext(item) == VMA_NULL);
   5421     if (IsEmpty())
   5422     {
   5423         m_Front = item;
   5424         m_Back = item;
   5425         m_Count = 1;
   5426     }
   5427     else
   5428     {
   5429         ItemTypeTraits::AccessNext(item) = m_Front;
   5430         ItemTypeTraits::AccessPrev(m_Front) = item;
   5431         m_Front = item;
   5432         ++m_Count;
   5433     }
   5434 }
   5435 
   5436 template<typename ItemTypeTraits>
   5437 typename VmaIntrusiveLinkedList<ItemTypeTraits>::ItemType* VmaIntrusiveLinkedList<ItemTypeTraits>::PopBack()
   5438 {
   5439     VMA_HEAVY_ASSERT(m_Count > 0);
   5440     ItemType* const backItem = m_Back;
   5441     ItemType* const prevItem = ItemTypeTraits::GetPrev(backItem);
   5442     if (prevItem != VMA_NULL)
   5443     {
   5444         ItemTypeTraits::AccessNext(prevItem) = VMA_NULL;
   5445     }
   5446     m_Back = prevItem;
   5447     --m_Count;
   5448     ItemTypeTraits::AccessPrev(backItem) = VMA_NULL;
   5449     ItemTypeTraits::AccessNext(backItem) = VMA_NULL;
   5450     return backItem;
   5451 }
   5452 
   5453 template<typename ItemTypeTraits>
   5454 typename VmaIntrusiveLinkedList<ItemTypeTraits>::ItemType* VmaIntrusiveLinkedList<ItemTypeTraits>::PopFront()
   5455 {
   5456     VMA_HEAVY_ASSERT(m_Count > 0);
   5457     ItemType* const frontItem = m_Front;
   5458     ItemType* const nextItem = ItemTypeTraits::GetNext(frontItem);
   5459     if (nextItem != VMA_NULL)
   5460     {
   5461         ItemTypeTraits::AccessPrev(nextItem) = VMA_NULL;
   5462     }
   5463     m_Front = nextItem;
   5464     --m_Count;
   5465     ItemTypeTraits::AccessPrev(frontItem) = VMA_NULL;
   5466     ItemTypeTraits::AccessNext(frontItem) = VMA_NULL;
   5467     return frontItem;
   5468 }
   5469 
   5470 template<typename ItemTypeTraits>
   5471 void VmaIntrusiveLinkedList<ItemTypeTraits>::InsertBefore(ItemType* existingItem, ItemType* newItem)
   5472 {
   5473     VMA_HEAVY_ASSERT(newItem != VMA_NULL && ItemTypeTraits::GetPrev(newItem) == VMA_NULL && ItemTypeTraits::GetNext(newItem) == VMA_NULL);
   5474     if (existingItem != VMA_NULL)
   5475     {
   5476         ItemType* const prevItem = ItemTypeTraits::GetPrev(existingItem);
   5477         ItemTypeTraits::AccessPrev(newItem) = prevItem;
   5478         ItemTypeTraits::AccessNext(newItem) = existingItem;
   5479         ItemTypeTraits::AccessPrev(existingItem) = newItem;
   5480         if (prevItem != VMA_NULL)
   5481         {
   5482             ItemTypeTraits::AccessNext(prevItem) = newItem;
   5483         }
   5484         else
   5485         {
   5486             VMA_HEAVY_ASSERT(m_Front == existingItem);
   5487             m_Front = newItem;
   5488         }
   5489         ++m_Count;
   5490     }
   5491     else
   5492         PushBack(newItem);
   5493 }
   5494 
   5495 template<typename ItemTypeTraits>
   5496 void VmaIntrusiveLinkedList<ItemTypeTraits>::InsertAfter(ItemType* existingItem, ItemType* newItem)
   5497 {
   5498     VMA_HEAVY_ASSERT(newItem != VMA_NULL && ItemTypeTraits::GetPrev(newItem) == VMA_NULL && ItemTypeTraits::GetNext(newItem) == VMA_NULL);
   5499     if (existingItem != VMA_NULL)
   5500     {
   5501         ItemType* const nextItem = ItemTypeTraits::GetNext(existingItem);
   5502         ItemTypeTraits::AccessNext(newItem) = nextItem;
   5503         ItemTypeTraits::AccessPrev(newItem) = existingItem;
   5504         ItemTypeTraits::AccessNext(existingItem) = newItem;
   5505         if (nextItem != VMA_NULL)
   5506         {
   5507             ItemTypeTraits::AccessPrev(nextItem) = newItem;
   5508         }
   5509         else
   5510         {
   5511             VMA_HEAVY_ASSERT(m_Back == existingItem);
   5512             m_Back = newItem;
   5513         }
   5514         ++m_Count;
   5515     }
   5516     else
   5517         PushFront(newItem);
   5518 }
   5519 
   5520 template<typename ItemTypeTraits>
   5521 void VmaIntrusiveLinkedList<ItemTypeTraits>::Remove(ItemType* item)
   5522 {
   5523     VMA_HEAVY_ASSERT(item != VMA_NULL && m_Count > 0);
   5524     if (ItemTypeTraits::GetPrev(item) != VMA_NULL)
   5525     {
   5526         ItemTypeTraits::AccessNext(ItemTypeTraits::AccessPrev(item)) = ItemTypeTraits::GetNext(item);
   5527     }
   5528     else
   5529     {
   5530         VMA_HEAVY_ASSERT(m_Front == item);
   5531         m_Front = ItemTypeTraits::GetNext(item);
   5532     }
   5533 
   5534     if (ItemTypeTraits::GetNext(item) != VMA_NULL)
   5535     {
   5536         ItemTypeTraits::AccessPrev(ItemTypeTraits::AccessNext(item)) = ItemTypeTraits::GetPrev(item);
   5537     }
   5538     else
   5539     {
   5540         VMA_HEAVY_ASSERT(m_Back == item);
   5541         m_Back = ItemTypeTraits::GetPrev(item);
   5542     }
   5543     ItemTypeTraits::AccessPrev(item) = VMA_NULL;
   5544     ItemTypeTraits::AccessNext(item) = VMA_NULL;
   5545     --m_Count;
   5546 }
   5547 
   5548 template<typename ItemTypeTraits>
   5549 void VmaIntrusiveLinkedList<ItemTypeTraits>::RemoveAll()
   5550 {
   5551     if (!IsEmpty())
   5552     {
   5553         ItemType* item = m_Back;
   5554         while (item != VMA_NULL)
   5555         {
   5556             ItemType* const prevItem = ItemTypeTraits::AccessPrev(item);
   5557             ItemTypeTraits::AccessPrev(item) = VMA_NULL;
   5558             ItemTypeTraits::AccessNext(item) = VMA_NULL;
   5559             item = prevItem;
   5560         }
   5561         m_Front = VMA_NULL;
   5562         m_Back = VMA_NULL;
   5563         m_Count = 0;
   5564     }
   5565 }
   5566 #endif // _VMA_INTRUSIVE_LINKED_LIST_FUNCTIONS
   5567 #endif // _VMA_INTRUSIVE_LINKED_LIST
   5568 
   5569 #if !defined(_VMA_STRING_BUILDER) && VMA_STATS_STRING_ENABLED
   5570 class VmaStringBuilder
   5571 {
   5572 public:
   5573     VmaStringBuilder(const VkAllocationCallbacks* allocationCallbacks) : m_Data(VmaStlAllocator<char>(allocationCallbacks)) {}
   5574     ~VmaStringBuilder() = default;
   5575 
   5576     size_t GetLength() const { return m_Data.size(); }
   5577     const char* GetData() const { return m_Data.data(); }
   5578     void AddNewLine() { Add('\n'); }
   5579     void Add(char ch) { m_Data.push_back(ch); }
   5580 
   5581     void Add(const char* pStr);
   5582     void AddNumber(uint32_t num);
   5583     void AddNumber(uint64_t num);
   5584     void AddPointer(const void* ptr);
   5585 
   5586 private:
   5587     VmaVector<char, VmaStlAllocator<char>> m_Data;
   5588 };
   5589 
   5590 #ifndef _VMA_STRING_BUILDER_FUNCTIONS
   5591 void VmaStringBuilder::Add(const char* pStr)
   5592 {
   5593     const size_t strLen = strlen(pStr);
   5594     if (strLen > 0)
   5595     {
   5596         const size_t oldCount = m_Data.size();
   5597         m_Data.resize(oldCount + strLen);
   5598         memcpy(m_Data.data() + oldCount, pStr, strLen);
   5599     }
   5600 }
   5601 
   5602 void VmaStringBuilder::AddNumber(uint32_t num)
   5603 {
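            // Render the decimal digits right-to-left into a stack buffer:
            // a uint32_t needs at most 10 digits plus the terminating '\0'.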
   5604     char buf[11];
   5605     buf[10] = '\0';
   5606     char* p = &buf[10];
   5607     do
   5608     {
   5609         *--p = '0' + (char)(num % 10);
   5610         num /= 10;
   5611     } while (num);
   5612     Add(p);
   5613 }
   5614 
   5615 void VmaStringBuilder::AddNumber(uint64_t num)
   5616 {
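            // Same as above: a uint64_t needs at most 20 digits plus the terminating '\0'.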
   5617     char buf[21];
   5618     buf[20] = '\0';
   5619     char* p = &buf[20];
   5620     do
   5621     {
   5622         *--p = '0' + (char)(num % 10);
   5623         num /= 10;
   5624     } while (num);
   5625     Add(p);
   5626 }
   5627 
   5628 void VmaStringBuilder::AddPointer(const void* ptr)
   5629 {
   5630     char buf[21];
   5631     VmaPtrToStr(buf, sizeof(buf), ptr);
   5632     Add(buf);
   5633 }
   5634 #endif //_VMA_STRING_BUILDER_FUNCTIONS
   5635 #endif // _VMA_STRING_BUILDER
   5636 
   5637 #if !defined(_VMA_JSON_WRITER) && VMA_STATS_STRING_ENABLED
   5638 /*
   5639 Helps to conveniently build a syntactically correct JSON document, written to
   5640 the VmaStringBuilder passed to the constructor.
   5641 */
   5642 class VmaJsonWriter
   5643 {
   5644     VMA_CLASS_NO_COPY_NO_MOVE(VmaJsonWriter)
   5645 public:
   5646     // sb - string builder to write the document to. Must remain alive for the whole lifetime of this object.
   5647     VmaJsonWriter(const VkAllocationCallbacks* pAllocationCallbacks, VmaStringBuilder& sb);
   5648     ~VmaJsonWriter();
   5649 
   5650     // Begins object by writing "{".
   5651     // Inside an object, you must call pairs of WriteString and a value, e.g.:
   5652     // j.BeginObject(true); j.WriteString("A"); j.WriteNumber(1); j.WriteString("B"); j.WriteNumber(2); j.EndObject();
   5653     // Will write: { "A": 1, "B": 2 }
   5654     void BeginObject(bool singleLine = false);
   5655     // Ends object by writing "}".
   5656     void EndObject();
   5657 
   5658     // Begins array by writing "[".
   5659     // Inside an array, you can write a sequence of any values.
   5660     void BeginArray(bool singleLine = false);
   5661     // Ends array by writing "]".
   5662     void EndArray();
   5663 
   5664     // Writes a string value inside "".
   5665     // pStr can contain any ANSI characters, including '"', new line etc. - they will be properly escaped.
   5666     void WriteString(const char* pStr);
   5667 
   5668     // Begins writing a string value.
   5669     // Call BeginString, ContinueString, ContinueString, ..., EndString instead of
   5670     // WriteString to conveniently build the string content incrementally, made of
   5671     // parts including numbers.
   5672     void BeginString(const char* pStr = VMA_NULL);
   5673     // Posts next part of an open string.
   5674     void ContinueString(const char* pStr);
   5675     // Posts next part of an open string. The number is converted to decimal characters.
   5676     void ContinueString(uint32_t n);
   5677     void ContinueString(uint64_t n);
   5678     // Posts next part of an open string. Pointer value is converted to characters
   5679     // using "%p" formatting - shown as hexadecimal number, e.g.: 000000081276Ad00
   5680     void ContinueString_Pointer(const void* ptr);
   5681     // Ends writing a string value by writing '"'.
   5682     void EndString(const char* pStr = VMA_NULL);
   5683 
   5684     // Writes a number value.
   5685     void WriteNumber(uint32_t n);
   5686     void WriteNumber(uint64_t n);
   5687     // Writes a boolean value - false or true.
   5688     void WriteBool(bool b);
   5689     // Writes a null value.
   5690     void WriteNull();
   5691 
   5692 private:
   5693     enum COLLECTION_TYPE
   5694     {
   5695         COLLECTION_TYPE_OBJECT,
   5696         COLLECTION_TYPE_ARRAY,
   5697     };
   5698     struct StackItem
   5699     {
   5700         COLLECTION_TYPE type;
   5701         uint32_t valueCount;
   5702         bool singleLineMode;
   5703     };
   5704 
   5705     static const char* const INDENT;
   5706 
   5707     VmaStringBuilder& m_SB;
   5708     VmaVector< StackItem, VmaStlAllocator<StackItem> > m_Stack;
   5709     bool m_InsideString;
   5710 
   5711     void BeginValue(bool isString);
   5712     void WriteIndent(bool oneLess = false);
   5713 };
   5714 const char* const VmaJsonWriter::INDENT = "  ";
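        /*
        Illustrative sketch, not part of the library (sb is a VmaStringBuilder,
        pAllocationCallbacks a hypothetical placeholder):

            VmaJsonWriter json(pAllocationCallbacks, sb);
            json.BeginObject();
            json.WriteString("Values");
            json.BeginArray(true); // Single-line array.
            json.WriteNumber(1u);
            json.WriteNumber(2u);
            json.EndArray();
            json.EndObject();

        appends to sb:

            {
              "Values": [1, 2]
            }
        */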
   5715 
   5716 #ifndef _VMA_JSON_WRITER_FUNCTIONS
   5717 VmaJsonWriter::VmaJsonWriter(const VkAllocationCallbacks* pAllocationCallbacks, VmaStringBuilder& sb)
   5718     : m_SB(sb),
   5719     m_Stack(VmaStlAllocator<StackItem>(pAllocationCallbacks)),
   5720     m_InsideString(false) {}
   5721 
   5722 VmaJsonWriter::~VmaJsonWriter()
   5723 {
   5724     VMA_ASSERT(!m_InsideString);
   5725     VMA_ASSERT(m_Stack.empty());
   5726 }
   5727 
   5728 void VmaJsonWriter::BeginObject(bool singleLine)
   5729 {
   5730     VMA_ASSERT(!m_InsideString);
   5731 
   5732     BeginValue(false);
   5733     m_SB.Add('{');
   5734 
   5735     StackItem item;
   5736     item.type = COLLECTION_TYPE_OBJECT;
   5737     item.valueCount = 0;
   5738     item.singleLineMode = singleLine;
   5739     m_Stack.push_back(item);
   5740 }
   5741 
   5742 void VmaJsonWriter::EndObject()
   5743 {
   5744     VMA_ASSERT(!m_InsideString);
   5745 
   5746     WriteIndent(true);
   5747     m_SB.Add('}');
   5748 
   5749     VMA_ASSERT(!m_Stack.empty() && m_Stack.back().type == COLLECTION_TYPE_OBJECT);
   5750     m_Stack.pop_back();
   5751 }
   5752 
   5753 void VmaJsonWriter::BeginArray(bool singleLine)
   5754 {
   5755     VMA_ASSERT(!m_InsideString);
   5756 
   5757     BeginValue(false);
   5758     m_SB.Add('[');
   5759 
   5760     StackItem item;
   5761     item.type = COLLECTION_TYPE_ARRAY;
   5762     item.valueCount = 0;
   5763     item.singleLineMode = singleLine;
   5764     m_Stack.push_back(item);
   5765 }
   5766 
   5767 void VmaJsonWriter::EndArray()
   5768 {
   5769     VMA_ASSERT(!m_InsideString);
   5770 
   5771     WriteIndent(true);
   5772     m_SB.Add(']');
   5773 
   5774     VMA_ASSERT(!m_Stack.empty() && m_Stack.back().type == COLLECTION_TYPE_ARRAY);
   5775     m_Stack.pop_back();
   5776 }
   5777 
   5778 void VmaJsonWriter::WriteString(const char* pStr)
   5779 {
   5780     BeginString(pStr);
   5781     EndString();
   5782 }
   5783 
   5784 void VmaJsonWriter::BeginString(const char* pStr)
   5785 {
   5786     VMA_ASSERT(!m_InsideString);
   5787 
   5788     BeginValue(true);
   5789     m_SB.Add('"');
   5790     m_InsideString = true;
   5791     if (pStr != VMA_NULL && pStr[0] != '\0')
   5792     {
   5793         ContinueString(pStr);
   5794     }
   5795 }
   5796 
   5797 void VmaJsonWriter::ContinueString(const char* pStr)
   5798 {
   5799     VMA_ASSERT(m_InsideString);
   5800 
   5801     const size_t strLen = strlen(pStr);
   5802     for (size_t i = 0; i < strLen; ++i)
   5803     {
   5804         char ch = pStr[i];
   5805         if (ch == '\\')
   5806         {
   5807             m_SB.Add("\\\\");
   5808         }
   5809         else if (ch == '"')
   5810         {
   5811             m_SB.Add("\\\"");
   5812         }
   5813         else if ((uint8_t)ch >= 32)
   5814         {
   5815             m_SB.Add(ch);
   5816         }
   5817         else switch (ch)
   5818         {
   5819         case '\b':
   5820             m_SB.Add("\\b");
   5821             break;
   5822         case '\f':
   5823             m_SB.Add("\\f");
   5824             break;
   5825         case '\n':
   5826             m_SB.Add("\\n");
   5827             break;
   5828         case '\r':
   5829             m_SB.Add("\\r");
   5830             break;
   5831         case '\t':
   5832             m_SB.Add("\\t");
   5833             break;
   5834         default:
   5835             VMA_ASSERT(0 && "Character not currently supported.");
   5836         }
   5837     }
   5838 }
   5839 
   5840 void VmaJsonWriter::ContinueString(uint32_t n)
   5841 {
   5842     VMA_ASSERT(m_InsideString);
   5843     m_SB.AddNumber(n);
   5844 }
   5845 
   5846 void VmaJsonWriter::ContinueString(uint64_t n)
   5847 {
   5848     VMA_ASSERT(m_InsideString);
   5849     m_SB.AddNumber(n);
   5850 }
   5851 
   5852 void VmaJsonWriter::ContinueString_Pointer(const void* ptr)
   5853 {
   5854     VMA_ASSERT(m_InsideString);
   5855     m_SB.AddPointer(ptr);
   5856 }
   5857 
   5858 void VmaJsonWriter::EndString(const char* pStr)
   5859 {
   5860     VMA_ASSERT(m_InsideString);
   5861     if (pStr != VMA_NULL && pStr[0] != '\0')
   5862     {
   5863         ContinueString(pStr);
   5864     }
   5865     m_SB.Add('"');
   5866     m_InsideString = false;
   5867 }
   5868 
   5869 void VmaJsonWriter::WriteNumber(uint32_t n)
   5870 {
   5871     VMA_ASSERT(!m_InsideString);
   5872     BeginValue(false);
   5873     m_SB.AddNumber(n);
   5874 }
   5875 
   5876 void VmaJsonWriter::WriteNumber(uint64_t n)
   5877 {
   5878     VMA_ASSERT(!m_InsideString);
   5879     BeginValue(false);
   5880     m_SB.AddNumber(n);
   5881 }
   5882 
   5883 void VmaJsonWriter::WriteBool(bool b)
   5884 {
   5885     VMA_ASSERT(!m_InsideString);
   5886     BeginValue(false);
   5887     m_SB.Add(b ? "true" : "false");
   5888 }
   5889 
   5890 void VmaJsonWriter::WriteNull()
   5891 {
   5892     VMA_ASSERT(!m_InsideString);
   5893     BeginValue(false);
   5894     m_SB.Add("null");
   5895 }
   5896 
   5897 void VmaJsonWriter::BeginValue(bool isString)
   5898 {
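            // Inside an object, entries alternate between names (even valueCount, must be
            // strings) and values (odd valueCount, preceded by ": "). Elements of an object
            // or array other than the first are separated from their predecessor by ", ".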
   5899     if (!m_Stack.empty())
   5900     {
   5901         StackItem& currItem = m_Stack.back();
   5902         if (currItem.type == COLLECTION_TYPE_OBJECT &&
   5903             currItem.valueCount % 2 == 0)
   5904         {
   5905             VMA_ASSERT(isString);
   5906         }
   5907 
   5908         if (currItem.type == COLLECTION_TYPE_OBJECT &&
   5909             currItem.valueCount % 2 != 0)
   5910         {
   5911             m_SB.Add(": ");
   5912         }
   5913         else if (currItem.valueCount > 0)
   5914         {
   5915             m_SB.Add(", ");
   5916             WriteIndent();
   5917         }
   5918         else
   5919         {
   5920             WriteIndent();
   5921         }
   5922         ++currItem.valueCount;
   5923     }
   5924 }
   5925 
   5926 void VmaJsonWriter::WriteIndent(bool oneLess)
   5927 {
   5928     if (!m_Stack.empty() && !m_Stack.back().singleLineMode)
   5929     {
   5930         m_SB.AddNewLine();
   5931 
   5932         size_t count = m_Stack.size();
   5933         if (count > 0 && oneLess)
   5934         {
   5935             --count;
   5936         }
   5937         for (size_t i = 0; i < count; ++i)
   5938         {
   5939             m_SB.Add(INDENT);
   5940         }
   5941     }
   5942 }
   5943 #endif // _VMA_JSON_WRITER_FUNCTIONS
   5944 
   5945 static void VmaPrintDetailedStatistics(VmaJsonWriter& json, const VmaDetailedStatistics& stat)
   5946 {
   5947     json.BeginObject();
   5948 
   5949     json.WriteString("BlockCount");
   5950     json.WriteNumber(stat.statistics.blockCount);
   5951     json.WriteString("BlockBytes");
   5952     json.WriteNumber(stat.statistics.blockBytes);
   5953     json.WriteString("AllocationCount");
   5954     json.WriteNumber(stat.statistics.allocationCount);
   5955     json.WriteString("AllocationBytes");
   5956     json.WriteNumber(stat.statistics.allocationBytes);
   5957     json.WriteString("UnusedRangeCount");
   5958     json.WriteNumber(stat.unusedRangeCount);
   5959 
   5960     if (stat.statistics.allocationCount > 1)
   5961     {
   5962         json.WriteString("AllocationSizeMin");
   5963         json.WriteNumber(stat.allocationSizeMin);
   5964         json.WriteString("AllocationSizeMax");
   5965         json.WriteNumber(stat.allocationSizeMax);
   5966     }
   5967     if (stat.unusedRangeCount > 1)
   5968     {
   5969         json.WriteString("UnusedRangeSizeMin");
   5970         json.WriteNumber(stat.unusedRangeSizeMin);
   5971         json.WriteString("UnusedRangeSizeMax");
   5972         json.WriteNumber(stat.unusedRangeSizeMax);
   5973     }
   5974     json.EndObject();
   5975 }
   5976 #endif // _VMA_JSON_WRITER
   5977 
   5978 #ifndef _VMA_MAPPING_HYSTERESIS
   5979 
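        /*
        Helper for VmaDeviceMemoryBlock. Decides when a block should hold one extra
        "persistent" mapping reference, so that bursts of Map/Unmap calls do not turn
        into repeated vkMapMemory/vkUnmapMemory calls on the same block. While the
        extra mapping is off, Map/Unmap events advance m_MajorCounter toward turning
        it on; while it is on, Alloc/Free events advance m_MajorCounter toward turning
        it off. Opposing events advance m_MinorCounter, which delays the switch - this
        is the hysteresis. COUNTER_MIN_EXTRA_MAPPING is the minimum number of events
        before any switch can happen.
        */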
   5980 class VmaMappingHysteresis
   5981 {
   5982     VMA_CLASS_NO_COPY_NO_MOVE(VmaMappingHysteresis)
   5983 public:
   5984     VmaMappingHysteresis() = default;
   5985 
   5986     uint32_t GetExtraMapping() const { return m_ExtraMapping; }
   5987 
   5988     // Call when Map was called.
   5989     // Returns true if switched to extra +1 mapping reference count.
   5990     bool PostMap()
   5991     {
   5992 #if VMA_MAPPING_HYSTERESIS_ENABLED
   5993         if(m_ExtraMapping == 0)
   5994         {
   5995             ++m_MajorCounter;
   5996             if(m_MajorCounter >= COUNTER_MIN_EXTRA_MAPPING)
   5997             {
   5998                 m_ExtraMapping = 1;
   5999                 m_MajorCounter = 0;
   6000                 m_MinorCounter = 0;
   6001                 return true;
   6002             }
   6003         }
   6004         else // m_ExtraMapping == 1
   6005             PostMinorCounter();
   6006 #endif // #if VMA_MAPPING_HYSTERESIS_ENABLED
   6007         return false;
   6008     }
   6009 
   6010     // Call when Unmap was called.
   6011     void PostUnmap()
   6012     {
   6013 #if VMA_MAPPING_HYSTERESIS_ENABLED
   6014         if(m_ExtraMapping == 0)
   6015             ++m_MajorCounter;
   6016         else // m_ExtraMapping == 1
   6017             PostMinorCounter();
   6018 #endif // #if VMA_MAPPING_HYSTERESIS_ENABLED
   6019     }
   6020 
   6021     // Call when allocation was made from the memory block.
   6022     void PostAlloc()
   6023     {
   6024 #if VMA_MAPPING_HYSTERESIS_ENABLED
   6025         if(m_ExtraMapping == 1)
   6026             ++m_MajorCounter;
   6027         else // m_ExtraMapping == 0
   6028             PostMinorCounter();
   6029 #endif // #if VMA_MAPPING_HYSTERESIS_ENABLED
   6030     }
   6031 
   6032     // Call when allocation was freed from the memory block.
   6033     // Returns true if switched to extra -1 mapping reference count.
   6034     bool PostFree()
   6035     {
   6036 #if VMA_MAPPING_HYSTERESIS_ENABLED
   6037         if(m_ExtraMapping == 1)
   6038         {
   6039             ++m_MajorCounter;
   6040             if(m_MajorCounter >= COUNTER_MIN_EXTRA_MAPPING &&
   6041                 m_MajorCounter > m_MinorCounter + 1)
   6042             {
   6043                 m_ExtraMapping = 0;
   6044                 m_MajorCounter = 0;
   6045                 m_MinorCounter = 0;
   6046                 return true;
   6047             }
   6048         }
   6049         else // m_ExtraMapping == 0
   6050             PostMinorCounter();
   6051 #endif // #if VMA_MAPPING_HYSTERESIS_ENABLED
   6052         return false;
   6053     }
   6054 
   6055 private:
   6056     static const int32_t COUNTER_MIN_EXTRA_MAPPING = 7;
   6057 
   6058     uint32_t m_MinorCounter = 0;
   6059     uint32_t m_MajorCounter = 0;
   6060     uint32_t m_ExtraMapping = 0; // 0 or 1.
   6061 
   6062     void PostMinorCounter()
   6063     {
   6064         if(m_MinorCounter < m_MajorCounter)
   6065         {
   6066             ++m_MinorCounter;
   6067         }
   6068         else if(m_MajorCounter > 0)
   6069         {
   6070             --m_MajorCounter;
   6071             --m_MinorCounter;
   6072         }
   6073     }
   6074 };
   6075 
   6076 #endif // _VMA_MAPPING_HYSTERESIS
   6077 
   6078 #ifndef _VMA_DEVICE_MEMORY_BLOCK
   6079 /*
   6080 Represents a single block of device memory (`VkDeviceMemory`) with all the
   6081 data about its regions (aka suballocations, #VmaAllocation), assigned and free.
   6082 
   6083 Thread-safety:
   6084 - Access to m_pMetadata must be externally synchronized.
   6085 - Map, Unmap, Bind* are synchronized internally.
   6086 */
   6087 class VmaDeviceMemoryBlock
   6088 {
   6089     VMA_CLASS_NO_COPY_NO_MOVE(VmaDeviceMemoryBlock)
   6090 public:
   6091     VmaBlockMetadata* m_pMetadata;
   6092 
   6093     VmaDeviceMemoryBlock(VmaAllocator hAllocator);
   6094     ~VmaDeviceMemoryBlock();
   6095 
   6096     // Always call after construction.
   6097     void Init(
   6098         VmaAllocator hAllocator,
   6099         VmaPool hParentPool,
   6100         uint32_t newMemoryTypeIndex,
   6101         VkDeviceMemory newMemory,
   6102         VkDeviceSize newSize,
   6103         uint32_t id,
   6104         uint32_t algorithm,
   6105         VkDeviceSize bufferImageGranularity);
   6106     // Always call before destruction.
   6107     void Destroy(VmaAllocator allocator);
   6108 
   6109     VmaPool GetParentPool() const { return m_hParentPool; }
   6110     VkDeviceMemory GetDeviceMemory() const { return m_hMemory; }
   6111     uint32_t GetMemoryTypeIndex() const { return m_MemoryTypeIndex; }
   6112     uint32_t GetId() const { return m_Id; }
   6113     void* GetMappedData() const { return m_pMappedData; }
   6114     uint32_t GetMapRefCount() const { return m_MapCount; }
   6115 
   6116     // Call when allocation/free was made from m_pMetadata.
   6117     // Used for m_MappingHysteresis.
   6118     void PostAlloc(VmaAllocator hAllocator);
   6119     void PostFree(VmaAllocator hAllocator);
   6120 
   6121     // Validates all data structures inside this object. If not valid, returns false.
   6122     bool Validate() const;
   6123     VkResult CheckCorruption(VmaAllocator hAllocator);
   6124 
   6125     // ppData can be null.
   6126     VkResult Map(VmaAllocator hAllocator, uint32_t count, void** ppData);
   6127     void Unmap(VmaAllocator hAllocator, uint32_t count);
   6128 
   6129     VkResult WriteMagicValueAfterAllocation(VmaAllocator hAllocator, VkDeviceSize allocOffset, VkDeviceSize allocSize);
   6130     VkResult ValidateMagicValueAfterAllocation(VmaAllocator hAllocator, VkDeviceSize allocOffset, VkDeviceSize allocSize);
   6131 
   6132     VkResult BindBufferMemory(
   6133         const VmaAllocator hAllocator,
   6134         const VmaAllocation hAllocation,
   6135         VkDeviceSize allocationLocalOffset,
   6136         VkBuffer hBuffer,
   6137         const void* pNext);
   6138     VkResult BindImageMemory(
   6139         const VmaAllocator hAllocator,
   6140         const VmaAllocation hAllocation,
   6141         VkDeviceSize allocationLocalOffset,
   6142         VkImage hImage,
   6143         const void* pNext);
   6144 
   6145 private:
   6146     VmaPool m_hParentPool; // VK_NULL_HANDLE if it does not belong to a custom pool.
   6147     uint32_t m_MemoryTypeIndex;
   6148     uint32_t m_Id;
   6149     VkDeviceMemory m_hMemory;
   6150 
   6151     /*
   6152     Protects access to m_hMemory so it is not used by multiple threads simultaneously, e.g. vkMapMemory, vkBindBufferMemory.
   6153     Also protects m_MapCount, m_pMappedData.
   6154     Allocations, deallocations, any change in m_pMetadata is protected by parent's VmaBlockVector::m_Mutex.
   6155     */
   6156     VMA_MUTEX m_MapAndBindMutex;
   6157     VmaMappingHysteresis m_MappingHysteresis;
   6158     uint32_t m_MapCount;
   6159     void* m_pMappedData;
   6160 };
   6161 #endif // _VMA_DEVICE_MEMORY_BLOCK
   6162 
   6163 #ifndef _VMA_ALLOCATION_T
   6164 struct VmaAllocation_T
   6165 {
   6166     friend struct VmaDedicatedAllocationListItemTraits;
   6167 
   6168     enum FLAGS
   6169     {
   6170         FLAG_PERSISTENT_MAP   = 0x01,
   6171         FLAG_MAPPING_ALLOWED  = 0x02,
   6172     };
   6173 
   6174 public:
   6175     enum ALLOCATION_TYPE
   6176     {
   6177         ALLOCATION_TYPE_NONE,
   6178         ALLOCATION_TYPE_BLOCK,
   6179         ALLOCATION_TYPE_DEDICATED,
   6180     };
   6181 
   6182     // This struct is allocated using VmaPoolAllocator.
   6183     VmaAllocation_T(bool mappingAllowed);
   6184     ~VmaAllocation_T();
   6185 
   6186     void InitBlockAllocation(
   6187         VmaDeviceMemoryBlock* block,
   6188         VmaAllocHandle allocHandle,
   6189         VkDeviceSize alignment,
   6190         VkDeviceSize size,
   6191         uint32_t memoryTypeIndex,
   6192         VmaSuballocationType suballocationType,
   6193         bool mapped);
   6194     // pMappedData not null means the allocation was created with the MAPPED flag.
   6195     void InitDedicatedAllocation(
   6196         VmaPool hParentPool,
   6197         uint32_t memoryTypeIndex,
   6198         VkDeviceMemory hMemory,
   6199         VmaSuballocationType suballocationType,
   6200         void* pMappedData,
   6201         VkDeviceSize size);
   6202 
   6203     ALLOCATION_TYPE GetType() const { return (ALLOCATION_TYPE)m_Type; }
   6204     VkDeviceSize GetAlignment() const { return m_Alignment; }
   6205     VkDeviceSize GetSize() const { return m_Size; }
   6206     void* GetUserData() const { return m_pUserData; }
   6207     const char* GetName() const { return m_pName; }
   6208     VmaSuballocationType GetSuballocationType() const { return (VmaSuballocationType)m_SuballocationType; }
   6209 
   6210     VmaDeviceMemoryBlock* GetBlock() const { VMA_ASSERT(m_Type == ALLOCATION_TYPE_BLOCK); return m_BlockAllocation.m_Block; }
   6211     uint32_t GetMemoryTypeIndex() const { return m_MemoryTypeIndex; }
   6212     bool IsPersistentMap() const { return (m_Flags & FLAG_PERSISTENT_MAP) != 0; }
   6213     bool IsMappingAllowed() const { return (m_Flags & FLAG_MAPPING_ALLOWED) != 0; }
   6214 
   6215     void SetUserData(VmaAllocator hAllocator, void* pUserData) { m_pUserData = pUserData; }
   6216     void SetName(VmaAllocator hAllocator, const char* pName);
   6217     void FreeName(VmaAllocator hAllocator);
   6218     uint8_t SwapBlockAllocation(VmaAllocator hAllocator, VmaAllocation allocation);
   6219     VmaAllocHandle GetAllocHandle() const;
   6220     VkDeviceSize GetOffset() const;
   6221     VmaPool GetParentPool() const;
   6222     VkDeviceMemory GetMemory() const;
   6223     void* GetMappedData() const;
   6224 
   6225     void BlockAllocMap();
   6226     void BlockAllocUnmap();
   6227     VkResult DedicatedAllocMap(VmaAllocator hAllocator, void** ppData);
   6228     void DedicatedAllocUnmap(VmaAllocator hAllocator);
   6229 
   6230 #if VMA_STATS_STRING_ENABLED
   6231     VmaBufferImageUsage GetBufferImageUsage() const { return m_BufferImageUsage; }
   6232     void InitBufferUsage(const VkBufferCreateInfo &createInfo, bool useKhrMaintenance5)
   6233     {
   6234         VMA_ASSERT(m_BufferImageUsage == VmaBufferImageUsage::UNKNOWN);
   6235         m_BufferImageUsage = VmaBufferImageUsage(createInfo, useKhrMaintenance5);
   6236     }
   6237     void InitImageUsage(const VkImageCreateInfo &createInfo)
   6238     {
   6239         VMA_ASSERT(m_BufferImageUsage == VmaBufferImageUsage::UNKNOWN);
   6240         m_BufferImageUsage = VmaBufferImageUsage(createInfo);
   6241     }
   6242     void PrintParameters(class VmaJsonWriter& json) const;
   6243 #endif
   6244 
   6245 private:
   6246     // Allocation out of VmaDeviceMemoryBlock.
   6247     struct BlockAllocation
   6248     {
   6249         VmaDeviceMemoryBlock* m_Block;
   6250         VmaAllocHandle m_AllocHandle;
   6251     };
   6252     // Allocation for an object that has its own private VkDeviceMemory.
   6253     struct DedicatedAllocation
   6254     {
   6255         VmaPool m_hParentPool; // VK_NULL_HANDLE if it does not belong to a custom pool.
   6256         VkDeviceMemory m_hMemory;
   6257         void* m_pMappedData; // Not null means memory is mapped.
   6258         VmaAllocation_T* m_Prev;
   6259         VmaAllocation_T* m_Next;
   6260     };
   6261     union
   6262     {
   6263         // Allocation out of VmaDeviceMemoryBlock.
   6264         BlockAllocation m_BlockAllocation;
   6265         // Allocation for an object that has its own private VkDeviceMemory.
   6266         DedicatedAllocation m_DedicatedAllocation;
   6267     };
   6268 
   6269     VkDeviceSize m_Alignment;
   6270     VkDeviceSize m_Size;
   6271     void* m_pUserData;
   6272     char* m_pName;
   6273     uint32_t m_MemoryTypeIndex;
   6274     uint8_t m_Type; // ALLOCATION_TYPE
   6275     uint8_t m_SuballocationType; // VmaSuballocationType
   6276     // Reference counter for vmaMapMemory()/vmaUnmapMemory().
   6277     uint8_t m_MapCount;
   6278     uint8_t m_Flags; // enum FLAGS
   6279 #if VMA_STATS_STRING_ENABLED
   6280     VmaBufferImageUsage m_BufferImageUsage; // 0 if unknown.
   6281 #endif
   6282 };
   6283 #endif // _VMA_ALLOCATION_T
   6284 
   6285 #ifndef _VMA_DEDICATED_ALLOCATION_LIST_ITEM_TRAITS
   6286 struct VmaDedicatedAllocationListItemTraits
   6287 {
   6288     typedef VmaAllocation_T ItemType;
   6289 
   6290     static ItemType* GetPrev(const ItemType* item)
   6291     {
   6292         VMA_HEAVY_ASSERT(item->GetType() == VmaAllocation_T::ALLOCATION_TYPE_DEDICATED);
   6293         return item->m_DedicatedAllocation.m_Prev;
   6294     }
   6295     static ItemType* GetNext(const ItemType* item)
   6296     {
   6297         VMA_HEAVY_ASSERT(item->GetType() == VmaAllocation_T::ALLOCATION_TYPE_DEDICATED);
   6298         return item->m_DedicatedAllocation.m_Next;
   6299     }
   6300     static ItemType*& AccessPrev(ItemType* item)
   6301     {
   6302         VMA_HEAVY_ASSERT(item->GetType() == VmaAllocation_T::ALLOCATION_TYPE_DEDICATED);
   6303         return item->m_DedicatedAllocation.m_Prev;
   6304     }
   6305     static ItemType*& AccessNext(ItemType* item)
   6306     {
   6307         VMA_HEAVY_ASSERT(item->GetType() == VmaAllocation_T::ALLOCATION_TYPE_DEDICATED);
   6308         return item->m_DedicatedAllocation.m_Next;
   6309     }
   6310 };
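
         // Illustrative sketch: these traits let VmaIntrusiveLinkedList chain dedicated
         // allocations through their own m_Prev/m_Next pointers, with no per-node heap
         // allocation. Usage mirrors the list class below:
         //
         //   VmaIntrusiveLinkedList<VmaDedicatedAllocationListItemTraits> list;
         //   list.PushBack(alloc); // alloc must be of type ALLOCATION_TYPE_DEDICATED
         //   for (VmaAllocation_T* a = list.Front(); a != VMA_NULL; a = list.GetNext(a))
         //   {
         //       // visit each dedicated allocation
         //   }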
   6311 #endif // _VMA_DEDICATED_ALLOCATION_LIST_ITEM_TRAITS
   6312 
   6313 #ifndef _VMA_DEDICATED_ALLOCATION_LIST
   6314 /*
   6315 Stores linked list of VmaAllocation_T objects.
   6316 Thread-safe, synchronized internally.
   6317 */
   6318 class VmaDedicatedAllocationList
   6319 {
   6320     VMA_CLASS_NO_COPY_NO_MOVE(VmaDedicatedAllocationList)
   6321 public:
   6322     VmaDedicatedAllocationList() {}
   6323     ~VmaDedicatedAllocationList();
   6324 
   6325     void Init(bool useMutex) { m_UseMutex = useMutex; }
   6326     bool Validate();
   6327 
   6328     void AddDetailedStatistics(VmaDetailedStatistics& inoutStats);
   6329     void AddStatistics(VmaStatistics& inoutStats);
   6330 #if VMA_STATS_STRING_ENABLED
   6331     // Writes JSON array with the list of allocations.
   6332     void BuildStatsString(VmaJsonWriter& json);
   6333 #endif
   6334 
   6335     bool IsEmpty();
   6336     void Register(VmaAllocation alloc);
   6337     void Unregister(VmaAllocation alloc);
   6338 
   6339 private:
   6340     typedef VmaIntrusiveLinkedList<VmaDedicatedAllocationListItemTraits> DedicatedAllocationLinkedList;
   6341 
   6342     bool m_UseMutex = true;
   6343     VMA_RW_MUTEX m_Mutex;
   6344     DedicatedAllocationLinkedList m_AllocationList;
   6345 };
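
         // Usage sketch (hypothetical call sites; the real allocator drives this list in
         // the same spirit): a dedicated allocation is registered when created and
         // unregistered when freed, so statistics and leak checks can walk the list:
         //
         //   list.Init(useMutex);    // once, at allocator creation
         //   list.Register(alloc);   // after the dedicated VkDeviceMemory is allocated
         //   ...
         //   list.Unregister(alloc); // before the dedicated VkDeviceMemory is freed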
   6346 
   6347 #ifndef _VMA_DEDICATED_ALLOCATION_LIST_FUNCTIONS
   6348 
   6349 VmaDedicatedAllocationList::~VmaDedicatedAllocationList()
   6350 {
   6351     VMA_HEAVY_ASSERT(Validate());
   6352 
   6353     if (!m_AllocationList.IsEmpty())
   6354     {
   6355         VMA_ASSERT_LEAK(false && "Unfreed dedicated allocations found!");
   6356     }
   6357 }
   6358 
   6359 bool VmaDedicatedAllocationList::Validate()
   6360 {
    6361     VmaMutexLockRead lock(m_Mutex, m_UseMutex);
    6362     const size_t declaredCount = m_AllocationList.GetCount();
    6363     size_t actualCount = 0;
   6364     for (VmaAllocation alloc = m_AllocationList.Front();
   6365         alloc != VMA_NULL; alloc = m_AllocationList.GetNext(alloc))
   6366     {
   6367         ++actualCount;
   6368     }
   6369     VMA_VALIDATE(actualCount == declaredCount);
   6370 
   6371     return true;
   6372 }
   6373 
   6374 void VmaDedicatedAllocationList::AddDetailedStatistics(VmaDetailedStatistics& inoutStats)
   6375 {
             VmaMutexLockRead lock(m_Mutex, m_UseMutex);
    6376     for(auto* item = m_AllocationList.Front(); item != VMA_NULL; item = DedicatedAllocationLinkedList::GetNext(item))
   6377     {
   6378         const VkDeviceSize size = item->GetSize();
   6379         inoutStats.statistics.blockCount++;
   6380         inoutStats.statistics.blockBytes += size;
   6381         VmaAddDetailedStatisticsAllocation(inoutStats, item->GetSize());
   6382     }
   6383 }
   6384 
   6385 void VmaDedicatedAllocationList::AddStatistics(VmaStatistics& inoutStats)
   6386 {
   6387     VmaMutexLockRead lock(m_Mutex, m_UseMutex);
   6388 
   6389     const uint32_t allocCount = (uint32_t)m_AllocationList.GetCount();
   6390     inoutStats.blockCount += allocCount;
   6391     inoutStats.allocationCount += allocCount;
   6392 
   6393     for(auto* item = m_AllocationList.Front(); item != VMA_NULL; item = DedicatedAllocationLinkedList::GetNext(item))
   6394     {
   6395         const VkDeviceSize size = item->GetSize();
   6396         inoutStats.blockBytes += size;
   6397         inoutStats.allocationBytes += size;
   6398     }
   6399 }
   6400 
   6401 #if VMA_STATS_STRING_ENABLED
   6402 void VmaDedicatedAllocationList::BuildStatsString(VmaJsonWriter& json)
   6403 {
   6404     VmaMutexLockRead lock(m_Mutex, m_UseMutex);
   6405     json.BeginArray();
   6406     for (VmaAllocation alloc = m_AllocationList.Front();
   6407         alloc != VMA_NULL; alloc = m_AllocationList.GetNext(alloc))
   6408     {
   6409         json.BeginObject(true);
   6410         alloc->PrintParameters(json);
   6411         json.EndObject();
   6412     }
   6413     json.EndArray();
   6414 }
   6415 #endif // VMA_STATS_STRING_ENABLED
   6416 
   6417 bool VmaDedicatedAllocationList::IsEmpty()
   6418 {
   6419     VmaMutexLockRead lock(m_Mutex, m_UseMutex);
   6420     return m_AllocationList.IsEmpty();
   6421 }
   6422 
   6423 void VmaDedicatedAllocationList::Register(VmaAllocation alloc)
   6424 {
   6425     VmaMutexLockWrite lock(m_Mutex, m_UseMutex);
   6426     m_AllocationList.PushBack(alloc);
   6427 }
   6428 
   6429 void VmaDedicatedAllocationList::Unregister(VmaAllocation alloc)
   6430 {
   6431     VmaMutexLockWrite lock(m_Mutex, m_UseMutex);
   6432     m_AllocationList.Remove(alloc);
   6433 }
   6434 #endif // _VMA_DEDICATED_ALLOCATION_LIST_FUNCTIONS
   6435 #endif // _VMA_DEDICATED_ALLOCATION_LIST
   6436 
   6437 #ifndef _VMA_SUBALLOCATION
   6438 /*
   6439 Represents a region of VmaDeviceMemoryBlock that is either assigned and returned as
    6440 an allocated memory block, or free.
   6441 */
   6442 struct VmaSuballocation
   6443 {
   6444     VkDeviceSize offset;
   6445     VkDeviceSize size;
   6446     void* userData;
   6447     VmaSuballocationType type;
   6448 };
   6449 
   6450 // Comparator for offsets.
   6451 struct VmaSuballocationOffsetLess
   6452 {
   6453     bool operator()(const VmaSuballocation& lhs, const VmaSuballocation& rhs) const
   6454     {
   6455         return lhs.offset < rhs.offset;
   6456     }
   6457 };
   6458 
   6459 struct VmaSuballocationOffsetGreater
   6460 {
   6461     bool operator()(const VmaSuballocation& lhs, const VmaSuballocation& rhs) const
   6462     {
   6463         return lhs.offset > rhs.offset;
   6464     }
   6465 };
   6466 
   6467 struct VmaSuballocationItemSizeLess
   6468 {
   6469     bool operator()(const VmaSuballocationList::iterator lhs,
   6470         const VmaSuballocationList::iterator rhs) const
   6471     {
   6472         return lhs->size < rhs->size;
   6473     }
   6474 
   6475     bool operator()(const VmaSuballocationList::iterator lhs,
   6476         VkDeviceSize rhsSize) const
   6477     {
   6478         return lhs->size < rhsSize;
   6479     }
   6480 };
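
         // Illustrative sketch: these comparators keep suballocation containers sorted so
         // lookups can use binary search, e.g. finding a suballocation by offset with
         // VMA's sorted-find helper defined earlier in this header:
         //
         //   VmaSuballocation refSuballoc = {};
         //   refSuballoc.offset = offset;
         //   auto it = VmaBinaryFindSorted(suballocations.begin(), suballocations.end(),
         //       refSuballoc, VmaSuballocationOffsetLess());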
   6481 #endif // _VMA_SUBALLOCATION
   6482 
   6483 #ifndef _VMA_ALLOCATION_REQUEST
   6484 /*
   6485 Parameters of planned allocation inside a VmaDeviceMemoryBlock.
   6486 item points to a FREE suballocation.
   6487 */
   6488 struct VmaAllocationRequest
   6489 {
   6490     VmaAllocHandle allocHandle;
   6491     VkDeviceSize size;
   6492     VmaSuballocationList::iterator item;
   6493     void* customData;
   6494     uint64_t algorithmData;
   6495     VmaAllocationRequestType type;
   6496 };
   6497 #endif // _VMA_ALLOCATION_REQUEST
   6498 
   6499 #ifndef _VMA_BLOCK_METADATA
   6500 /*
   6501 Data structure used for bookkeeping of allocations and unused ranges of memory
   6502 in a single VkDeviceMemory block.
   6503 */
   6504 class VmaBlockMetadata
   6505 {
   6506     VMA_CLASS_NO_COPY_NO_MOVE(VmaBlockMetadata)
   6507 public:
   6508     // pAllocationCallbacks, if not null, must be owned externally - alive and unchanged for the whole lifetime of this object.
   6509     VmaBlockMetadata(const VkAllocationCallbacks* pAllocationCallbacks,
   6510         VkDeviceSize bufferImageGranularity, bool isVirtual);
   6511     virtual ~VmaBlockMetadata() = default;
   6512 
   6513     virtual void Init(VkDeviceSize size) { m_Size = size; }
   6514     bool IsVirtual() const { return m_IsVirtual; }
   6515     VkDeviceSize GetSize() const { return m_Size; }
   6516 
   6517     // Validates all data structures inside this object. If not valid, returns false.
   6518     virtual bool Validate() const = 0;
   6519     virtual size_t GetAllocationCount() const = 0;
   6520     virtual size_t GetFreeRegionsCount() const = 0;
   6521     virtual VkDeviceSize GetSumFreeSize() const = 0;
    6522     // Returns true if this block is empty - contains only a single free suballocation.
   6523     virtual bool IsEmpty() const = 0;
   6524     virtual void GetAllocationInfo(VmaAllocHandle allocHandle, VmaVirtualAllocationInfo& outInfo) = 0;
   6525     virtual VkDeviceSize GetAllocationOffset(VmaAllocHandle allocHandle) const = 0;
   6526     virtual void* GetAllocationUserData(VmaAllocHandle allocHandle) const = 0;
   6527 
   6528     virtual VmaAllocHandle GetAllocationListBegin() const = 0;
   6529     virtual VmaAllocHandle GetNextAllocation(VmaAllocHandle prevAlloc) const = 0;
   6530     virtual VkDeviceSize GetNextFreeRegionSize(VmaAllocHandle alloc) const = 0;
   6531 
   6532     // Shouldn't modify blockCount.
   6533     virtual void AddDetailedStatistics(VmaDetailedStatistics& inoutStats) const = 0;
   6534     virtual void AddStatistics(VmaStatistics& inoutStats) const = 0;
   6535 
   6536 #if VMA_STATS_STRING_ENABLED
   6537     virtual void PrintDetailedMap(class VmaJsonWriter& json) const = 0;
   6538 #endif
   6539 
   6540     // Tries to find a place for suballocation with given parameters inside this block.
   6541     // If succeeded, fills pAllocationRequest and returns true.
   6542     // If failed, returns false.
   6543     virtual bool CreateAllocationRequest(
   6544         VkDeviceSize allocSize,
   6545         VkDeviceSize allocAlignment,
   6546         bool upperAddress,
   6547         VmaSuballocationType allocType,
   6548         // Always one of VMA_ALLOCATION_CREATE_STRATEGY_* or VMA_ALLOCATION_INTERNAL_STRATEGY_* flags.
   6549         uint32_t strategy,
   6550         VmaAllocationRequest* pAllocationRequest) = 0;
   6551 
   6552     virtual VkResult CheckCorruption(const void* pBlockData) = 0;
   6553 
   6554     // Makes actual allocation based on request. Request must already be checked and valid.
   6555     virtual void Alloc(
   6556         const VmaAllocationRequest& request,
   6557         VmaSuballocationType type,
   6558         void* userData) = 0;
   6559 
   6560     // Frees suballocation assigned to given memory region.
   6561     virtual void Free(VmaAllocHandle allocHandle) = 0;
   6562 
   6563     // Frees all allocations.
   6564     // Careful! Don't call it if there are VmaAllocation objects owned by userData of cleared allocations!
   6565     virtual void Clear() = 0;
   6566 
   6567     virtual void SetAllocationUserData(VmaAllocHandle allocHandle, void* userData) = 0;
   6568     virtual void DebugLogAllAllocations() const = 0;
   6569 
   6570 protected:
   6571     const VkAllocationCallbacks* GetAllocationCallbacks() const { return m_pAllocationCallbacks; }
   6572     VkDeviceSize GetBufferImageGranularity() const { return m_BufferImageGranularity; }
   6573     VkDeviceSize GetDebugMargin() const { return VkDeviceSize(IsVirtual() ? 0 : VMA_DEBUG_MARGIN); }
   6574 
   6575     void DebugLogAllocation(VkDeviceSize offset, VkDeviceSize size, void* userData) const;
   6576 #if VMA_STATS_STRING_ENABLED
   6578     void PrintDetailedMap_Begin(class VmaJsonWriter& json,
   6579         VkDeviceSize unusedBytes,
   6580         size_t allocationCount,
   6581         size_t unusedRangeCount) const;
   6582     void PrintDetailedMap_Allocation(class VmaJsonWriter& json,
   6583         VkDeviceSize offset, VkDeviceSize size, void* userData) const;
   6584     void PrintDetailedMap_UnusedRange(class VmaJsonWriter& json,
   6585         VkDeviceSize offset,
   6586         VkDeviceSize size) const;
   6587     void PrintDetailedMap_End(class VmaJsonWriter& json) const;
   6588 #endif
   6589 
   6590 private:
   6591     VkDeviceSize m_Size;
   6592     const VkAllocationCallbacks* m_pAllocationCallbacks;
   6593     const VkDeviceSize m_BufferImageGranularity;
   6594     const bool m_IsVirtual;
   6595 };
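
         // Typical call sequence on a concrete implementation (illustrative sketch only;
         // the suballocation type and strategy flag are example values):
         //
         //   VmaAllocationRequest request = {};
         //   if (pMetadata->CreateAllocationRequest(allocSize, allocAlignment,
         //           false /*upperAddress*/, VMA_SUBALLOCATION_TYPE_BUFFER,
         //           VMA_ALLOCATION_CREATE_STRATEGY_MIN_TIME_BIT, &request))
         //   {
         //       pMetadata->Alloc(request, VMA_SUBALLOCATION_TYPE_BUFFER, pUserData);
         //   }
         //   ...
         //   pMetadata->Free(request.allocHandle);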
   6596 
   6597 #ifndef _VMA_BLOCK_METADATA_FUNCTIONS
   6598 VmaBlockMetadata::VmaBlockMetadata(const VkAllocationCallbacks* pAllocationCallbacks,
   6599     VkDeviceSize bufferImageGranularity, bool isVirtual)
   6600     : m_Size(0),
   6601     m_pAllocationCallbacks(pAllocationCallbacks),
   6602     m_BufferImageGranularity(bufferImageGranularity),
   6603     m_IsVirtual(isVirtual) {}
   6604 
   6605 void VmaBlockMetadata::DebugLogAllocation(VkDeviceSize offset, VkDeviceSize size, void* userData) const
   6606 {
   6607     if (IsVirtual())
   6608     {
   6609         VMA_LEAK_LOG_FORMAT("UNFREED VIRTUAL ALLOCATION; Offset: %" PRIu64 "; Size: %" PRIu64 "; UserData: %p", offset, size, userData);
   6610     }
   6611     else
   6612     {
   6613         VMA_ASSERT(userData != VMA_NULL);
   6614         VmaAllocation allocation = reinterpret_cast<VmaAllocation>(userData);
   6615 
   6616         userData = allocation->GetUserData();
   6617         const char* name = allocation->GetName();
   6618 
   6619 #if VMA_STATS_STRING_ENABLED
   6620         VMA_LEAK_LOG_FORMAT("UNFREED ALLOCATION; Offset: %" PRIu64 "; Size: %" PRIu64 "; UserData: %p; Name: %s; Type: %s; Usage: %" PRIu64,
   6621             offset, size, userData, name ? name : "vma_empty",
   6622             VMA_SUBALLOCATION_TYPE_NAMES[allocation->GetSuballocationType()],
   6623             (uint64_t)allocation->GetBufferImageUsage().Value);
   6624 #else
   6625         VMA_LEAK_LOG_FORMAT("UNFREED ALLOCATION; Offset: %" PRIu64 "; Size: %" PRIu64 "; UserData: %p; Name: %s; Type: %u",
   6626             offset, size, userData, name ? name : "vma_empty",
   6627             (unsigned)allocation->GetSuballocationType());
   6628 #endif // VMA_STATS_STRING_ENABLED
   6629     }
   6630 
   6631 }
   6632 
   6633 #if VMA_STATS_STRING_ENABLED
   6634 void VmaBlockMetadata::PrintDetailedMap_Begin(class VmaJsonWriter& json,
   6635     VkDeviceSize unusedBytes, size_t allocationCount, size_t unusedRangeCount) const
   6636 {
   6637     json.WriteString("TotalBytes");
   6638     json.WriteNumber(GetSize());
   6639 
   6640     json.WriteString("UnusedBytes");
   6641     json.WriteNumber(unusedBytes);
   6642 
   6643     json.WriteString("Allocations");
   6644     json.WriteNumber((uint64_t)allocationCount);
   6645 
   6646     json.WriteString("UnusedRanges");
   6647     json.WriteNumber((uint64_t)unusedRangeCount);
   6648 
   6649     json.WriteString("Suballocations");
   6650     json.BeginArray();
   6651 }
   6652 
   6653 void VmaBlockMetadata::PrintDetailedMap_Allocation(class VmaJsonWriter& json,
   6654     VkDeviceSize offset, VkDeviceSize size, void* userData) const
   6655 {
   6656     json.BeginObject(true);
   6657 
   6658     json.WriteString("Offset");
   6659     json.WriteNumber(offset);
   6660 
   6661     if (IsVirtual())
   6662     {
   6663         json.WriteString("Size");
   6664         json.WriteNumber(size);
   6665         if (userData)
   6666         {
   6667             json.WriteString("CustomData");
   6668             json.BeginString();
   6669             json.ContinueString_Pointer(userData);
   6670             json.EndString();
   6671         }
   6672     }
   6673     else
   6674     {
   6675         ((VmaAllocation)userData)->PrintParameters(json);
   6676     }
   6677 
   6678     json.EndObject();
   6679 }
   6680 
   6681 void VmaBlockMetadata::PrintDetailedMap_UnusedRange(class VmaJsonWriter& json,
   6682     VkDeviceSize offset, VkDeviceSize size) const
   6683 {
   6684     json.BeginObject(true);
   6685 
   6686     json.WriteString("Offset");
   6687     json.WriteNumber(offset);
   6688 
   6689     json.WriteString("Type");
   6690     json.WriteString(VMA_SUBALLOCATION_TYPE_NAMES[VMA_SUBALLOCATION_TYPE_FREE]);
   6691 
   6692     json.WriteString("Size");
   6693     json.WriteNumber(size);
   6694 
   6695     json.EndObject();
   6696 }
   6697 
   6698 void VmaBlockMetadata::PrintDetailedMap_End(class VmaJsonWriter& json) const
   6699 {
   6700     json.EndArray();
   6701 }
   6702 #endif // VMA_STATS_STRING_ENABLED
   6703 #endif // _VMA_BLOCK_METADATA_FUNCTIONS
   6704 #endif // _VMA_BLOCK_METADATA
   6705 
   6706 #ifndef _VMA_BLOCK_BUFFER_IMAGE_GRANULARITY
    6707 // Before deleting an object of this class, remember to call 'Destroy()'.
   6708 class VmaBlockBufferImageGranularity final
   6709 {
   6710 public:
   6711     struct ValidationContext
   6712     {
   6713         const VkAllocationCallbacks* allocCallbacks;
   6714         uint16_t* pageAllocs;
   6715     };
   6716 
   6717     VmaBlockBufferImageGranularity(VkDeviceSize bufferImageGranularity);
   6718     ~VmaBlockBufferImageGranularity();
   6719 
   6720     bool IsEnabled() const { return m_BufferImageGranularity > MAX_LOW_BUFFER_IMAGE_GRANULARITY; }
   6721 
   6722     void Init(const VkAllocationCallbacks* pAllocationCallbacks, VkDeviceSize size);
    6723     // Before destroying the object you must call Destroy() to free its memory.
   6724     void Destroy(const VkAllocationCallbacks* pAllocationCallbacks);
   6725 
   6726     void RoundupAllocRequest(VmaSuballocationType allocType,
   6727         VkDeviceSize& inOutAllocSize,
   6728         VkDeviceSize& inOutAllocAlignment) const;
   6729 
   6730     bool CheckConflictAndAlignUp(VkDeviceSize& inOutAllocOffset,
   6731         VkDeviceSize allocSize,
   6732         VkDeviceSize blockOffset,
   6733         VkDeviceSize blockSize,
   6734         VmaSuballocationType allocType) const;
   6735 
   6736     void AllocPages(uint8_t allocType, VkDeviceSize offset, VkDeviceSize size);
   6737     void FreePages(VkDeviceSize offset, VkDeviceSize size);
   6738     void Clear();
   6739 
   6740     ValidationContext StartValidation(const VkAllocationCallbacks* pAllocationCallbacks,
    6741         bool isVirtual) const;
   6742     bool Validate(ValidationContext& ctx, VkDeviceSize offset, VkDeviceSize size) const;
   6743     bool FinishValidation(ValidationContext& ctx) const;
   6744 
   6745 private:
   6746     static const uint16_t MAX_LOW_BUFFER_IMAGE_GRANULARITY = 256;
   6747 
   6748     struct RegionInfo
   6749     {
   6750         uint8_t allocType;
   6751         uint16_t allocCount;
   6752     };
   6753 
   6754     VkDeviceSize m_BufferImageGranularity;
   6755     uint32_t m_RegionCount;
   6756     RegionInfo* m_RegionInfo;
   6757 
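             // The arithmetic below assumes m_BufferImageGranularity is a power of two:
             // masking with ~(granularity - 1) rounds an offset down to a page boundary,
             // and OffsetToPageIndex() divides by shifting with VMA_BITSCAN_MSB().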
   6758     uint32_t GetStartPage(VkDeviceSize offset) const { return OffsetToPageIndex(offset & ~(m_BufferImageGranularity - 1)); }
   6759     uint32_t GetEndPage(VkDeviceSize offset, VkDeviceSize size) const { return OffsetToPageIndex((offset + size - 1) & ~(m_BufferImageGranularity - 1)); }
   6760 
   6761     uint32_t OffsetToPageIndex(VkDeviceSize offset) const;
   6762     void AllocPage(RegionInfo& page, uint8_t allocType);
   6763 };
   6764 
   6765 #ifndef _VMA_BLOCK_BUFFER_IMAGE_GRANULARITY_FUNCTIONS
   6766 VmaBlockBufferImageGranularity::VmaBlockBufferImageGranularity(VkDeviceSize bufferImageGranularity)
   6767     : m_BufferImageGranularity(bufferImageGranularity),
   6768     m_RegionCount(0),
   6769     m_RegionInfo(VMA_NULL) {}
   6770 
   6771 VmaBlockBufferImageGranularity::~VmaBlockBufferImageGranularity()
   6772 {
   6773     VMA_ASSERT(m_RegionInfo == VMA_NULL && "Free not called before destroying object!");
   6774 }
   6775 
   6776 void VmaBlockBufferImageGranularity::Init(const VkAllocationCallbacks* pAllocationCallbacks, VkDeviceSize size)
   6777 {
   6778     if (IsEnabled())
   6779     {
   6780         m_RegionCount = static_cast<uint32_t>(VmaDivideRoundingUp(size, m_BufferImageGranularity));
   6781         m_RegionInfo = vma_new_array(pAllocationCallbacks, RegionInfo, m_RegionCount);
   6782         memset(m_RegionInfo, 0, m_RegionCount * sizeof(RegionInfo));
   6783     }
   6784 }
   6785 
   6786 void VmaBlockBufferImageGranularity::Destroy(const VkAllocationCallbacks* pAllocationCallbacks)
   6787 {
   6788     if (m_RegionInfo)
   6789     {
   6790         vma_delete_array(pAllocationCallbacks, m_RegionInfo, m_RegionCount);
   6791         m_RegionInfo = VMA_NULL;
   6792     }
   6793 }
   6794 
   6795 void VmaBlockBufferImageGranularity::RoundupAllocRequest(VmaSuballocationType allocType,
   6796     VkDeviceSize& inOutAllocSize,
   6797     VkDeviceSize& inOutAllocAlignment) const
   6798 {
   6799     if (m_BufferImageGranularity > 1 &&
   6800         m_BufferImageGranularity <= MAX_LOW_BUFFER_IMAGE_GRANULARITY)
   6801     {
   6802         if (allocType == VMA_SUBALLOCATION_TYPE_UNKNOWN ||
   6803             allocType == VMA_SUBALLOCATION_TYPE_IMAGE_UNKNOWN ||
   6804             allocType == VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL)
   6805         {
   6806             inOutAllocAlignment = VMA_MAX(inOutAllocAlignment, m_BufferImageGranularity);
   6807             inOutAllocSize = VmaAlignUp(inOutAllocSize, m_BufferImageGranularity);
   6808         }
   6809     }
   6810 }
   6811 
   6812 bool VmaBlockBufferImageGranularity::CheckConflictAndAlignUp(VkDeviceSize& inOutAllocOffset,
   6813     VkDeviceSize allocSize,
   6814     VkDeviceSize blockOffset,
   6815     VkDeviceSize blockSize,
   6816     VmaSuballocationType allocType) const
   6817 {
   6818     if (IsEnabled())
   6819     {
   6820         uint32_t startPage = GetStartPage(inOutAllocOffset);
   6821         if (m_RegionInfo[startPage].allocCount > 0 &&
   6822             VmaIsBufferImageGranularityConflict(static_cast<VmaSuballocationType>(m_RegionInfo[startPage].allocType), allocType))
   6823         {
   6824             inOutAllocOffset = VmaAlignUp(inOutAllocOffset, m_BufferImageGranularity);
   6825             if (blockSize < allocSize + inOutAllocOffset - blockOffset)
   6826                 return true;
   6827             ++startPage;
   6828         }
   6829         uint32_t endPage = GetEndPage(inOutAllocOffset, allocSize);
   6830         if (endPage != startPage &&
   6831             m_RegionInfo[endPage].allocCount > 0 &&
   6832             VmaIsBufferImageGranularityConflict(static_cast<VmaSuballocationType>(m_RegionInfo[endPage].allocType), allocType))
   6833         {
   6834             return true;
   6835         }
   6836     }
   6837     return false;
   6838 }
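
         // Worked example (illustrative): with bufferImageGranularity = 1024, suppose a
         // linear buffer already ends at offset 1000 and an optimal image is about to be
         // placed at offset 1008. Both fall into page 0 and the types conflict, so the
         // image's offset is aligned up to 1024; the function returns true (failure) only
         // if the aligned request no longer fits in the block, or if the allocation's new
         // end page still conflicts.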
   6839 
   6840 void VmaBlockBufferImageGranularity::AllocPages(uint8_t allocType, VkDeviceSize offset, VkDeviceSize size)
   6841 {
   6842     if (IsEnabled())
   6843     {
   6844         uint32_t startPage = GetStartPage(offset);
   6845         AllocPage(m_RegionInfo[startPage], allocType);
   6846 
   6847         uint32_t endPage = GetEndPage(offset, size);
   6848         if (startPage != endPage)
   6849             AllocPage(m_RegionInfo[endPage], allocType);
   6850     }
   6851 }
   6852 
   6853 void VmaBlockBufferImageGranularity::FreePages(VkDeviceSize offset, VkDeviceSize size)
   6854 {
   6855     if (IsEnabled())
   6856     {
   6857         uint32_t startPage = GetStartPage(offset);
   6858         --m_RegionInfo[startPage].allocCount;
   6859         if (m_RegionInfo[startPage].allocCount == 0)
   6860             m_RegionInfo[startPage].allocType = VMA_SUBALLOCATION_TYPE_FREE;
   6861         uint32_t endPage = GetEndPage(offset, size);
   6862         if (startPage != endPage)
   6863         {
   6864             --m_RegionInfo[endPage].allocCount;
   6865             if (m_RegionInfo[endPage].allocCount == 0)
   6866                 m_RegionInfo[endPage].allocType = VMA_SUBALLOCATION_TYPE_FREE;
   6867         }
   6868     }
   6869 }
   6870 
   6871 void VmaBlockBufferImageGranularity::Clear()
   6872 {
   6873     if (m_RegionInfo)
   6874         memset(m_RegionInfo, 0, m_RegionCount * sizeof(RegionInfo));
   6875 }
   6876 
   6877 VmaBlockBufferImageGranularity::ValidationContext VmaBlockBufferImageGranularity::StartValidation(
    6878     const VkAllocationCallbacks* pAllocationCallbacks, bool isVirtual) const
   6879 {
   6880     ValidationContext ctx{ pAllocationCallbacks, VMA_NULL };
    6881     if (!isVirtual && IsEnabled())
   6882     {
   6883         ctx.pageAllocs = vma_new_array(pAllocationCallbacks, uint16_t, m_RegionCount);
   6884         memset(ctx.pageAllocs, 0, m_RegionCount * sizeof(uint16_t));
   6885     }
   6886     return ctx;
   6887 }
   6888 
   6889 bool VmaBlockBufferImageGranularity::Validate(ValidationContext& ctx,
   6890     VkDeviceSize offset, VkDeviceSize size) const
   6891 {
   6892     if (IsEnabled())
   6893     {
   6894         uint32_t start = GetStartPage(offset);
   6895         ++ctx.pageAllocs[start];
   6896         VMA_VALIDATE(m_RegionInfo[start].allocCount > 0);
   6897 
   6898         uint32_t end = GetEndPage(offset, size);
   6899         if (start != end)
   6900         {
   6901             ++ctx.pageAllocs[end];
   6902             VMA_VALIDATE(m_RegionInfo[end].allocCount > 0);
   6903         }
   6904     }
   6905     return true;
   6906 }
   6907 
   6908 bool VmaBlockBufferImageGranularity::FinishValidation(ValidationContext& ctx) const
   6909 {
   6910     // Check proper page structure
   6911     if (IsEnabled())
   6912     {
   6913         VMA_ASSERT(ctx.pageAllocs != VMA_NULL && "Validation context not initialized!");
   6914 
   6915         for (uint32_t page = 0; page < m_RegionCount; ++page)
   6916         {
   6917             VMA_VALIDATE(ctx.pageAllocs[page] == m_RegionInfo[page].allocCount);
   6918         }
   6919         vma_delete_array(ctx.allocCallbacks, ctx.pageAllocs, m_RegionCount);
   6920         ctx.pageAllocs = VMA_NULL;
   6921     }
   6922     return true;
   6923 }
   6924 
   6925 uint32_t VmaBlockBufferImageGranularity::OffsetToPageIndex(VkDeviceSize offset) const
   6926 {
   6927     return static_cast<uint32_t>(offset >> VMA_BITSCAN_MSB(m_BufferImageGranularity));
   6928 }
   6929 
   6930 void VmaBlockBufferImageGranularity::AllocPage(RegionInfo& page, uint8_t allocType)
   6931 {
    6932     // When the current alloc type is FREE, it can be overridden by the new type.
   6933     if (page.allocCount == 0 || (page.allocCount > 0 && page.allocType == VMA_SUBALLOCATION_TYPE_FREE))
   6934         page.allocType = allocType;
   6935 
   6936     ++page.allocCount;
   6937 }
   6938 #endif // _VMA_BLOCK_BUFFER_IMAGE_GRANULARITY_FUNCTIONS
   6939 #endif // _VMA_BLOCK_BUFFER_IMAGE_GRANULARITY
   6940 
   6941 #ifndef _VMA_BLOCK_METADATA_LINEAR
   6942 /*
   6943 Allocations and their references in internal data structure look like this:
   6944 
   6945 if(m_2ndVectorMode == SECOND_VECTOR_EMPTY):
   6946 
   6947         0 +-------+
   6948           |       |
   6949           |       |
   6950           |       |
   6951           +-------+
   6952           | Alloc |  1st[m_1stNullItemsBeginCount]
   6953           +-------+
   6954           | Alloc |  1st[m_1stNullItemsBeginCount + 1]
   6955           +-------+
   6956           |  ...  |
   6957           +-------+
   6958           | Alloc |  1st[1st.size() - 1]
   6959           +-------+
   6960           |       |
   6961           |       |
   6962           |       |
   6963 GetSize() +-------+
   6964 
   6965 if(m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER):
   6966 
   6967         0 +-------+
   6968           | Alloc |  2nd[0]
   6969           +-------+
   6970           | Alloc |  2nd[1]
   6971           +-------+
   6972           |  ...  |
   6973           +-------+
   6974           | Alloc |  2nd[2nd.size() - 1]
   6975           +-------+
   6976           |       |
   6977           |       |
   6978           |       |
   6979           +-------+
   6980           | Alloc |  1st[m_1stNullItemsBeginCount]
   6981           +-------+
   6982           | Alloc |  1st[m_1stNullItemsBeginCount + 1]
   6983           +-------+
   6984           |  ...  |
   6985           +-------+
   6986           | Alloc |  1st[1st.size() - 1]
   6987           +-------+
   6988           |       |
   6989 GetSize() +-------+
   6990 
   6991 if(m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK):
   6992 
   6993         0 +-------+
   6994           |       |
   6995           |       |
   6996           |       |
   6997           +-------+
   6998           | Alloc |  1st[m_1stNullItemsBeginCount]
   6999           +-------+
   7000           | Alloc |  1st[m_1stNullItemsBeginCount + 1]
   7001           +-------+
   7002           |  ...  |
   7003           +-------+
   7004           | Alloc |  1st[1st.size() - 1]
   7005           +-------+
   7006           |       |
   7007           |       |
   7008           |       |
   7009           +-------+
   7010           | Alloc |  2nd[2nd.size() - 1]
   7011           +-------+
   7012           |  ...  |
   7013           +-------+
   7014           | Alloc |  2nd[1]
   7015           +-------+
   7016           | Alloc |  2nd[0]
   7017 GetSize() +-------+
   7018 
   7019 */
   7020 class VmaBlockMetadata_Linear : public VmaBlockMetadata
   7021 {
   7022     VMA_CLASS_NO_COPY_NO_MOVE(VmaBlockMetadata_Linear)
   7023 public:
   7024     VmaBlockMetadata_Linear(const VkAllocationCallbacks* pAllocationCallbacks,
   7025         VkDeviceSize bufferImageGranularity, bool isVirtual);
   7026     virtual ~VmaBlockMetadata_Linear() = default;
   7027 
   7028     VkDeviceSize GetSumFreeSize() const override { return m_SumFreeSize; }
   7029     bool IsEmpty() const override { return GetAllocationCount() == 0; }
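             // In this metadata, a VmaAllocHandle encodes (offset + 1), so a zero handle
             // (VK_NULL_HANDLE) never collides with a valid allocation at offset 0;
             // Validate() checks the matching invariant GetAllocHandle() == suballoc.offset + 1.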
   7030     VkDeviceSize GetAllocationOffset(VmaAllocHandle allocHandle) const override { return (VkDeviceSize)allocHandle - 1; }
   7031 
   7032     void Init(VkDeviceSize size) override;
   7033     bool Validate() const override;
   7034     size_t GetAllocationCount() const override;
   7035     size_t GetFreeRegionsCount() const override;
   7036 
   7037     void AddDetailedStatistics(VmaDetailedStatistics& inoutStats) const override;
   7038     void AddStatistics(VmaStatistics& inoutStats) const override;
   7039 
   7040 #if VMA_STATS_STRING_ENABLED
   7041     void PrintDetailedMap(class VmaJsonWriter& json) const override;
   7042 #endif
   7043 
   7044     bool CreateAllocationRequest(
   7045         VkDeviceSize allocSize,
   7046         VkDeviceSize allocAlignment,
   7047         bool upperAddress,
   7048         VmaSuballocationType allocType,
   7049         uint32_t strategy,
   7050         VmaAllocationRequest* pAllocationRequest) override;
   7051 
   7052     VkResult CheckCorruption(const void* pBlockData) override;
   7053 
   7054     void Alloc(
   7055         const VmaAllocationRequest& request,
   7056         VmaSuballocationType type,
   7057         void* userData) override;
   7058 
   7059     void Free(VmaAllocHandle allocHandle) override;
   7060     void GetAllocationInfo(VmaAllocHandle allocHandle, VmaVirtualAllocationInfo& outInfo) override;
   7061     void* GetAllocationUserData(VmaAllocHandle allocHandle) const override;
   7062     VmaAllocHandle GetAllocationListBegin() const override;
   7063     VmaAllocHandle GetNextAllocation(VmaAllocHandle prevAlloc) const override;
   7064     VkDeviceSize GetNextFreeRegionSize(VmaAllocHandle alloc) const override;
   7065     void Clear() override;
   7066     void SetAllocationUserData(VmaAllocHandle allocHandle, void* userData) override;
   7067     void DebugLogAllAllocations() const override;
   7068 
   7069 private:
   7070     /*
   7071     There are two suballocation vectors, used in ping-pong way.
   7072     The one with index m_1stVectorIndex is called 1st.
   7073     The one with index (m_1stVectorIndex ^ 1) is called 2nd.
   7074     2nd can be non-empty only when 1st is not empty.
   7075     When 2nd is not empty, m_2ndVectorMode indicates its mode of operation.
   7076     */
   7077     typedef VmaVector<VmaSuballocation, VmaStlAllocator<VmaSuballocation>> SuballocationVectorType;
   7078 
   7079     enum SECOND_VECTOR_MODE
   7080     {
   7081         SECOND_VECTOR_EMPTY,
   7082         /*
   7083         Suballocations in 2nd vector are created later than the ones in 1st, but they
    7084         all have smaller offsets.
   7085         */
   7086         SECOND_VECTOR_RING_BUFFER,
   7087         /*
   7088         Suballocations in 2nd vector are upper side of double stack.
   7089         They all have offsets higher than those in 1st vector.
   7090         Top of this stack means smaller offsets, but higher indices in this vector.
   7091         */
   7092         SECOND_VECTOR_DOUBLE_STACK,
   7093     };
   7094 
   7095     VkDeviceSize m_SumFreeSize;
   7096     SuballocationVectorType m_Suballocations0, m_Suballocations1;
   7097     uint32_t m_1stVectorIndex;
   7098     SECOND_VECTOR_MODE m_2ndVectorMode;
   7099     // Number of items in 1st vector with hAllocation = null at the beginning.
   7100     size_t m_1stNullItemsBeginCount;
   7101     // Number of other items in 1st vector with hAllocation = null somewhere in the middle.
   7102     size_t m_1stNullItemsMiddleCount;
   7103     // Number of items in 2nd vector with hAllocation = null.
   7104     size_t m_2ndNullItemsCount;
   7105 
   7106     SuballocationVectorType& AccessSuballocations1st() { return m_1stVectorIndex ? m_Suballocations1 : m_Suballocations0; }
   7107     SuballocationVectorType& AccessSuballocations2nd() { return m_1stVectorIndex ? m_Suballocations0 : m_Suballocations1; }
   7108     const SuballocationVectorType& AccessSuballocations1st() const { return m_1stVectorIndex ? m_Suballocations1 : m_Suballocations0; }
   7109     const SuballocationVectorType& AccessSuballocations2nd() const { return m_1stVectorIndex ? m_Suballocations0 : m_Suballocations1; }
   7110 
   7111     VmaSuballocation& FindSuballocation(VkDeviceSize offset) const;
   7112     bool ShouldCompact1st() const;
   7113     void CleanupAfterFree();
   7114 
   7115     bool CreateAllocationRequest_LowerAddress(
   7116         VkDeviceSize allocSize,
   7117         VkDeviceSize allocAlignment,
   7118         VmaSuballocationType allocType,
   7119         uint32_t strategy,
   7120         VmaAllocationRequest* pAllocationRequest);
   7121     bool CreateAllocationRequest_UpperAddress(
   7122         VkDeviceSize allocSize,
   7123         VkDeviceSize allocAlignment,
   7124         VmaSuballocationType allocType,
   7125         uint32_t strategy,
   7126         VmaAllocationRequest* pAllocationRequest);
   7127 };
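
         // Lifecycle sketch (illustrative, following the diagrams above): a linear block
         // starts with the 2nd vector in SECOND_VECTOR_EMPTY mode. An allocation requested
         // with upperAddress = true switches it to SECOND_VECTOR_DOUBLE_STACK, while a
         // lower-address allocation that wraps around to reuse space freed at the front of
         // the block switches it to SECOND_VECTOR_RING_BUFFER.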
   7128 
   7129 #ifndef _VMA_BLOCK_METADATA_LINEAR_FUNCTIONS
   7130 VmaBlockMetadata_Linear::VmaBlockMetadata_Linear(const VkAllocationCallbacks* pAllocationCallbacks,
   7131     VkDeviceSize bufferImageGranularity, bool isVirtual)
   7132     : VmaBlockMetadata(pAllocationCallbacks, bufferImageGranularity, isVirtual),
   7133     m_SumFreeSize(0),
   7134     m_Suballocations0(VmaStlAllocator<VmaSuballocation>(pAllocationCallbacks)),
   7135     m_Suballocations1(VmaStlAllocator<VmaSuballocation>(pAllocationCallbacks)),
   7136     m_1stVectorIndex(0),
   7137     m_2ndVectorMode(SECOND_VECTOR_EMPTY),
   7138     m_1stNullItemsBeginCount(0),
   7139     m_1stNullItemsMiddleCount(0),
   7140     m_2ndNullItemsCount(0) {}
   7141 
   7142 void VmaBlockMetadata_Linear::Init(VkDeviceSize size)
   7143 {
   7144     VmaBlockMetadata::Init(size);
   7145     m_SumFreeSize = size;
   7146 }
   7147 
   7148 bool VmaBlockMetadata_Linear::Validate() const
   7149 {
   7150     const SuballocationVectorType& suballocations1st = AccessSuballocations1st();
   7151     const SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
   7152 
   7153     VMA_VALIDATE(suballocations2nd.empty() == (m_2ndVectorMode == SECOND_VECTOR_EMPTY));
   7154     VMA_VALIDATE(!suballocations1st.empty() ||
   7155         suballocations2nd.empty() ||
   7156         m_2ndVectorMode != SECOND_VECTOR_RING_BUFFER);
   7157 
   7158     if (!suballocations1st.empty())
   7159     {
   7160         // Null item at the beginning should be accounted into m_1stNullItemsBeginCount.
   7161         VMA_VALIDATE(suballocations1st[m_1stNullItemsBeginCount].type != VMA_SUBALLOCATION_TYPE_FREE);
   7162         // Null item at the end should be just pop_back().
   7163         VMA_VALIDATE(suballocations1st.back().type != VMA_SUBALLOCATION_TYPE_FREE);
   7164     }
   7165     if (!suballocations2nd.empty())
   7166     {
   7167         // Null item at the end should be just pop_back().
   7168         VMA_VALIDATE(suballocations2nd.back().type != VMA_SUBALLOCATION_TYPE_FREE);
   7169     }
   7170 
   7171     VMA_VALIDATE(m_1stNullItemsBeginCount + m_1stNullItemsMiddleCount <= suballocations1st.size());
   7172     VMA_VALIDATE(m_2ndNullItemsCount <= suballocations2nd.size());
   7173 
   7174     VkDeviceSize sumUsedSize = 0;
   7175     const size_t suballoc1stCount = suballocations1st.size();
   7176     const VkDeviceSize debugMargin = GetDebugMargin();
   7177     VkDeviceSize offset = 0;
   7178 
   7179     if (m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER)
   7180     {
   7181         const size_t suballoc2ndCount = suballocations2nd.size();
   7182         size_t nullItem2ndCount = 0;
   7183         for (size_t i = 0; i < suballoc2ndCount; ++i)
   7184         {
   7185             const VmaSuballocation& suballoc = suballocations2nd[i];
   7186             const bool currFree = (suballoc.type == VMA_SUBALLOCATION_TYPE_FREE);
   7187 
   7188             VmaAllocation const alloc = (VmaAllocation)suballoc.userData;
   7189             if (!IsVirtual())
   7190             {
   7191                 VMA_VALIDATE(currFree == (alloc == VK_NULL_HANDLE));
   7192             }
   7193             VMA_VALIDATE(suballoc.offset >= offset);
   7194 
   7195             if (!currFree)
   7196             {
   7197                 if (!IsVirtual())
   7198                 {
   7199                     VMA_VALIDATE((VkDeviceSize)alloc->GetAllocHandle() == suballoc.offset + 1);
   7200                     VMA_VALIDATE(alloc->GetSize() == suballoc.size);
   7201                 }
   7202                 sumUsedSize += suballoc.size;
   7203             }
   7204             else
   7205             {
   7206                 ++nullItem2ndCount;
   7207             }
   7208 
   7209             offset = suballoc.offset + suballoc.size + debugMargin;
   7210         }
   7211 
   7212         VMA_VALIDATE(nullItem2ndCount == m_2ndNullItemsCount);
   7213     }
   7214 
   7215     for (size_t i = 0; i < m_1stNullItemsBeginCount; ++i)
   7216     {
   7217         const VmaSuballocation& suballoc = suballocations1st[i];
   7218         VMA_VALIDATE(suballoc.type == VMA_SUBALLOCATION_TYPE_FREE &&
   7219             suballoc.userData == VMA_NULL);
   7220     }
   7221 
   7222     size_t nullItem1stCount = m_1stNullItemsBeginCount;
   7223 
   7224     for (size_t i = m_1stNullItemsBeginCount; i < suballoc1stCount; ++i)
   7225     {
   7226         const VmaSuballocation& suballoc = suballocations1st[i];
   7227         const bool currFree = (suballoc.type == VMA_SUBALLOCATION_TYPE_FREE);
   7228 
   7229         VmaAllocation const alloc = (VmaAllocation)suballoc.userData;
   7230         if (!IsVirtual())
   7231         {
   7232             VMA_VALIDATE(currFree == (alloc == VK_NULL_HANDLE));
   7233         }
   7234         VMA_VALIDATE(suballoc.offset >= offset);
   7235         VMA_VALIDATE(i >= m_1stNullItemsBeginCount || currFree);
   7236 
   7237         if (!currFree)
   7238         {
   7239             if (!IsVirtual())
   7240             {
   7241                 VMA_VALIDATE((VkDeviceSize)alloc->GetAllocHandle() == suballoc.offset + 1);
   7242                 VMA_VALIDATE(alloc->GetSize() == suballoc.size);
   7243             }
   7244             sumUsedSize += suballoc.size;
   7245         }
   7246         else
   7247         {
   7248             ++nullItem1stCount;
   7249         }
   7250 
   7251         offset = suballoc.offset + suballoc.size + debugMargin;
   7252     }
   7253     VMA_VALIDATE(nullItem1stCount == m_1stNullItemsBeginCount + m_1stNullItemsMiddleCount);
   7254 
   7255     if (m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK)
   7256     {
   7257         const size_t suballoc2ndCount = suballocations2nd.size();
   7258         size_t nullItem2ndCount = 0;
   7259         for (size_t i = suballoc2ndCount; i--; )
   7260         {
   7261             const VmaSuballocation& suballoc = suballocations2nd[i];
   7262             const bool currFree = (suballoc.type == VMA_SUBALLOCATION_TYPE_FREE);
   7263 
   7264             VmaAllocation const alloc = (VmaAllocation)suballoc.userData;
   7265             if (!IsVirtual())
   7266             {
   7267                 VMA_VALIDATE(currFree == (alloc == VK_NULL_HANDLE));
   7268             }
   7269             VMA_VALIDATE(suballoc.offset >= offset);
   7270 
   7271             if (!currFree)
   7272             {
   7273                 if (!IsVirtual())
   7274                 {
   7275                     VMA_VALIDATE((VkDeviceSize)alloc->GetAllocHandle() == suballoc.offset + 1);
   7276                     VMA_VALIDATE(alloc->GetSize() == suballoc.size);
   7277                 }
   7278                 sumUsedSize += suballoc.size;
   7279             }
   7280             else
   7281             {
   7282                 ++nullItem2ndCount;
   7283             }
   7284 
   7285             offset = suballoc.offset + suballoc.size + debugMargin;
   7286         }
   7287 
   7288         VMA_VALIDATE(nullItem2ndCount == m_2ndNullItemsCount);
   7289     }
   7290 
   7291     VMA_VALIDATE(offset <= GetSize());
   7292     VMA_VALIDATE(m_SumFreeSize == GetSize() - sumUsedSize);
   7293 
   7294     return true;
   7295 }
   7296 
   7297 size_t VmaBlockMetadata_Linear::GetAllocationCount() const
   7298 {
   7299     return AccessSuballocations1st().size() - m_1stNullItemsBeginCount - m_1stNullItemsMiddleCount +
   7300         AccessSuballocations2nd().size() - m_2ndNullItemsCount;
   7301 }
   7302 
   7303 size_t VmaBlockMetadata_Linear::GetFreeRegionsCount() const
   7304 {
   7305     // Function only used for defragmentation, which is disabled for this algorithm
   7306     VMA_ASSERT(0);
   7307     return SIZE_MAX;
   7308 }
   7309 
   7310 void VmaBlockMetadata_Linear::AddDetailedStatistics(VmaDetailedStatistics& inoutStats) const
   7311 {
   7312     const VkDeviceSize size = GetSize();
   7313     const SuballocationVectorType& suballocations1st = AccessSuballocations1st();
   7314     const SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
   7315     const size_t suballoc1stCount = suballocations1st.size();
   7316     const size_t suballoc2ndCount = suballocations2nd.size();
   7317 
   7318     inoutStats.statistics.blockCount++;
   7319     inoutStats.statistics.blockBytes += size;
   7320 
   7321     VkDeviceSize lastOffset = 0;
   7322 
   7323     if (m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER)
   7324     {
   7325         const VkDeviceSize freeSpace2ndTo1stEnd = suballocations1st[m_1stNullItemsBeginCount].offset;
   7326         size_t nextAlloc2ndIndex = 0;
   7327         while (lastOffset < freeSpace2ndTo1stEnd)
   7328         {
   7329             // Find next non-null allocation or move nextAllocIndex to the end.
   7330             while (nextAlloc2ndIndex < suballoc2ndCount &&
   7331                 suballocations2nd[nextAlloc2ndIndex].userData == VMA_NULL)
   7332             {
   7333                 ++nextAlloc2ndIndex;
   7334             }
   7335 
   7336             // Found non-null allocation.
   7337             if (nextAlloc2ndIndex < suballoc2ndCount)
   7338             {
   7339                 const VmaSuballocation& suballoc = suballocations2nd[nextAlloc2ndIndex];
   7340 
   7341                 // 1. Process free space before this allocation.
   7342                 if (lastOffset < suballoc.offset)
   7343                 {
   7344                     // There is free space from lastOffset to suballoc.offset.
   7345                     const VkDeviceSize unusedRangeSize = suballoc.offset - lastOffset;
   7346                     VmaAddDetailedStatisticsUnusedRange(inoutStats, unusedRangeSize);
   7347                 }
   7348 
   7349                 // 2. Process this allocation.
   7350                 // There is allocation with suballoc.offset, suballoc.size.
   7351                 VmaAddDetailedStatisticsAllocation(inoutStats, suballoc.size);
   7352 
   7353                 // 3. Prepare for next iteration.
   7354                 lastOffset = suballoc.offset + suballoc.size;
   7355                 ++nextAlloc2ndIndex;
   7356             }
   7357             // We are at the end.
   7358             else
   7359             {
   7360                 // There is free space from lastOffset to freeSpace2ndTo1stEnd.
   7361                 if (lastOffset < freeSpace2ndTo1stEnd)
   7362                 {
   7363                     const VkDeviceSize unusedRangeSize = freeSpace2ndTo1stEnd - lastOffset;
   7364                     VmaAddDetailedStatisticsUnusedRange(inoutStats, unusedRangeSize);
   7365                 }
   7366 
   7367                 // End of loop.
   7368                 lastOffset = freeSpace2ndTo1stEnd;
   7369             }
   7370         }
   7371     }
   7372 
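             // Walk the 1st vector from its first non-null item up to either the end of
             // the block or, in double-stack mode, the bottom of the 2nd (upper) stack.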
   7373     size_t nextAlloc1stIndex = m_1stNullItemsBeginCount;
   7374     const VkDeviceSize freeSpace1stTo2ndEnd =
   7375         m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK ? suballocations2nd.back().offset : size;
   7376     while (lastOffset < freeSpace1stTo2ndEnd)
   7377     {
   7378         // Find next non-null allocation or move nextAllocIndex to the end.
   7379         while (nextAlloc1stIndex < suballoc1stCount &&
   7380             suballocations1st[nextAlloc1stIndex].userData == VMA_NULL)
   7381         {
   7382             ++nextAlloc1stIndex;
   7383         }
   7384 
   7385         // Found non-null allocation.
   7386         if (nextAlloc1stIndex < suballoc1stCount)
   7387         {
   7388             const VmaSuballocation& suballoc = suballocations1st[nextAlloc1stIndex];
   7389 
   7390             // 1. Process free space before this allocation.
   7391             if (lastOffset < suballoc.offset)
   7392             {
   7393                 // There is free space from lastOffset to suballoc.offset.
   7394                 const VkDeviceSize unusedRangeSize = suballoc.offset - lastOffset;
   7395                 VmaAddDetailedStatisticsUnusedRange(inoutStats, unusedRangeSize);
   7396             }
   7397 
   7398             // 2. Process this allocation.
   7399             // There is allocation with suballoc.offset, suballoc.size.
   7400             VmaAddDetailedStatisticsAllocation(inoutStats, suballoc.size);
   7401 
   7402             // 3. Prepare for next iteration.
   7403             lastOffset = suballoc.offset + suballoc.size;
   7404             ++nextAlloc1stIndex;
   7405         }
   7406         // We are at the end.
   7407         else
   7408         {
   7409             // There is free space from lastOffset to freeSpace1stTo2ndEnd.
   7410             if (lastOffset < freeSpace1stTo2ndEnd)
   7411             {
   7412                 const VkDeviceSize unusedRangeSize = freeSpace1stTo2ndEnd - lastOffset;
   7413                 VmaAddDetailedStatisticsUnusedRange(inoutStats, unusedRangeSize);
   7414             }
   7415 
   7416             // End of loop.
   7417             lastOffset = freeSpace1stTo2ndEnd;
   7418         }
   7419     }
   7420 
   7421     if (m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK)
   7422     {
   7423         size_t nextAlloc2ndIndex = suballocations2nd.size() - 1;
   7424         while (lastOffset < size)
   7425         {
   7426             // Find next non-null allocation or move nextAllocIndex to the end.
   7427             while (nextAlloc2ndIndex != SIZE_MAX &&
   7428                 suballocations2nd[nextAlloc2ndIndex].userData == VMA_NULL)
   7429             {
   7430                 --nextAlloc2ndIndex;
   7431             }
   7432 
   7433             // Found non-null allocation.
   7434             if (nextAlloc2ndIndex != SIZE_MAX)
   7435             {
   7436                 const VmaSuballocation& suballoc = suballocations2nd[nextAlloc2ndIndex];
   7437 
   7438                 // 1. Process free space before this allocation.
   7439                 if (lastOffset < suballoc.offset)
   7440                 {
   7441                     // There is free space from lastOffset to suballoc.offset.
   7442                     const VkDeviceSize unusedRangeSize = suballoc.offset - lastOffset;
   7443                     VmaAddDetailedStatisticsUnusedRange(inoutStats, unusedRangeSize);
   7444                 }
   7445 
   7446                 // 2. Process this allocation.
   7447                 // There is allocation with suballoc.offset, suballoc.size.
   7448                 VmaAddDetailedStatisticsAllocation(inoutStats, suballoc.size);
   7449 
   7450                 // 3. Prepare for next iteration.
   7451                 lastOffset = suballoc.offset + suballoc.size;
   7452                 --nextAlloc2ndIndex;
   7453             }
   7454             // We are at the end.
   7455             else
   7456             {
   7457                 // There is free space from lastOffset to size.
   7458                 if (lastOffset < size)
   7459                 {
   7460                     const VkDeviceSize unusedRangeSize = size - lastOffset;
   7461                     VmaAddDetailedStatisticsUnusedRange(inoutStats, unusedRangeSize);
   7462                 }
   7463 
   7464                 // End of loop.
   7465                 lastOffset = size;
   7466             }
   7467         }
   7468     }
   7469 }
   7470 
   7471 void VmaBlockMetadata_Linear::AddStatistics(VmaStatistics& inoutStats) const
   7472 {
   7473     const SuballocationVectorType& suballocations1st = AccessSuballocations1st();
   7474     const SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
   7475     const VkDeviceSize size = GetSize();
   7476     const size_t suballoc1stCount = suballocations1st.size();
   7477     const size_t suballoc2ndCount = suballocations2nd.size();
   7478 
   7479     inoutStats.blockCount++;
   7480     inoutStats.blockBytes += size;
   7481     inoutStats.allocationBytes += size - m_SumFreeSize;
   7482 
   7483     VkDeviceSize lastOffset = 0;
   7484 
   7485     if (m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER)
   7486     {
   7487         const VkDeviceSize freeSpace2ndTo1stEnd = suballocations1st[m_1stNullItemsBeginCount].offset;
    7488         size_t nextAlloc2ndIndex = 0;
   7489         while (lastOffset < freeSpace2ndTo1stEnd)
   7490         {
   7491             // Find next non-null allocation or move nextAlloc2ndIndex to the end.
   7492             while (nextAlloc2ndIndex < suballoc2ndCount &&
   7493                 suballocations2nd[nextAlloc2ndIndex].userData == VMA_NULL)
   7494             {
   7495                 ++nextAlloc2ndIndex;
   7496             }
   7497 
   7498             // Found non-null allocation.
   7499             if (nextAlloc2ndIndex < suballoc2ndCount)
   7500             {
   7501                 const VmaSuballocation& suballoc = suballocations2nd[nextAlloc2ndIndex];
   7502 
   7503                 // Process this allocation.
   7504                 // There is allocation with suballoc.offset, suballoc.size.
   7505                 ++inoutStats.allocationCount;
   7506 
   7507                 // Prepare for next iteration.
   7508                 lastOffset = suballoc.offset + suballoc.size;
   7509                 ++nextAlloc2ndIndex;
   7510             }
   7511             // We are at the end.
   7512             else
   7513             {
   7514                 // End of loop.
   7515                 lastOffset = freeSpace2ndTo1stEnd;
   7516             }
   7517         }
   7518     }
   7519 
   7520     size_t nextAlloc1stIndex = m_1stNullItemsBeginCount;
   7521     const VkDeviceSize freeSpace1stTo2ndEnd =
   7522         m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK ? suballocations2nd.back().offset : size;
   7523     while (lastOffset < freeSpace1stTo2ndEnd)
   7524     {
   7525         // Find next non-null allocation or move nextAllocIndex to the end.
   7526         while (nextAlloc1stIndex < suballoc1stCount &&
   7527             suballocations1st[nextAlloc1stIndex].userData == VMA_NULL)
   7528         {
   7529             ++nextAlloc1stIndex;
   7530         }
   7531 
   7532         // Found non-null allocation.
   7533         if (nextAlloc1stIndex < suballoc1stCount)
   7534         {
   7535             const VmaSuballocation& suballoc = suballocations1st[nextAlloc1stIndex];
   7536 
   7537             // Process this allocation.
   7538             // There is allocation with suballoc.offset, suballoc.size.
   7539             ++inoutStats.allocationCount;
   7540 
   7541             // Prepare for next iteration.
   7542             lastOffset = suballoc.offset + suballoc.size;
   7543             ++nextAlloc1stIndex;
   7544         }
   7545         // We are at the end.
   7546         else
   7547         {
   7548             // End of loop.
   7549             lastOffset = freeSpace1stTo2ndEnd;
   7550         }
   7551     }
   7552 
   7553     if (m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK)
   7554     {
   7555         size_t nextAlloc2ndIndex = suballocations2nd.size() - 1;
   7556         while (lastOffset < size)
   7557         {
   7558             // Find next non-null allocation or move nextAlloc2ndIndex to the end.
   7559             while (nextAlloc2ndIndex != SIZE_MAX &&
   7560                 suballocations2nd[nextAlloc2ndIndex].userData == VMA_NULL)
   7561             {
   7562                 --nextAlloc2ndIndex;
   7563             }
   7564 
   7565             // Found non-null allocation.
   7566             if (nextAlloc2ndIndex != SIZE_MAX)
   7567             {
   7568                 const VmaSuballocation& suballoc = suballocations2nd[nextAlloc2ndIndex];
   7569 
   7570                 // Process this allocation.
   7571                 // There is allocation with suballoc.offset, suballoc.size.
   7572                 ++inoutStats.allocationCount;
   7573 
   7574                 // Prepare for next iteration.
   7575                 lastOffset = suballoc.offset + suballoc.size;
   7576                 --nextAlloc2ndIndex;
   7577             }
   7578             // We are at the end.
   7579             else
   7580             {
   7581                 // End of loop.
   7582                 lastOffset = size;
   7583             }
   7584         }
   7585     }
   7586 }
   7587 
   7588 #if VMA_STATS_STRING_ENABLED
   7589 void VmaBlockMetadata_Linear::PrintDetailedMap(class VmaJsonWriter& json) const
   7590 {
   7591     const VkDeviceSize size = GetSize();
   7592     const SuballocationVectorType& suballocations1st = AccessSuballocations1st();
   7593     const SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
   7594     const size_t suballoc1stCount = suballocations1st.size();
   7595     const size_t suballoc2ndCount = suballocations2nd.size();
   7596 
   7597     // FIRST PASS
   7598 
   7599     size_t unusedRangeCount = 0;
   7600     VkDeviceSize usedBytes = 0;
   7601 
   7602     VkDeviceSize lastOffset = 0;
   7603 
   7604     size_t alloc2ndCount = 0;
   7605     if (m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER)
   7606     {
   7607         const VkDeviceSize freeSpace2ndTo1stEnd = suballocations1st[m_1stNullItemsBeginCount].offset;
   7608         size_t nextAlloc2ndIndex = 0;
   7609         while (lastOffset < freeSpace2ndTo1stEnd)
   7610         {
   7611             // Find next non-null allocation or move nextAlloc2ndIndex to the end.
   7612             while (nextAlloc2ndIndex < suballoc2ndCount &&
   7613                 suballocations2nd[nextAlloc2ndIndex].userData == VMA_NULL)
   7614             {
   7615                 ++nextAlloc2ndIndex;
   7616             }
   7617 
   7618             // Found non-null allocation.
   7619             if (nextAlloc2ndIndex < suballoc2ndCount)
   7620             {
   7621                 const VmaSuballocation& suballoc = suballocations2nd[nextAlloc2ndIndex];
   7622 
   7623                 // 1. Process free space before this allocation.
   7624                 if (lastOffset < suballoc.offset)
   7625                 {
   7626                     // There is free space from lastOffset to suballoc.offset.
   7627                     ++unusedRangeCount;
   7628                 }
   7629 
   7630                 // 2. Process this allocation.
   7631                 // There is allocation with suballoc.offset, suballoc.size.
   7632                 ++alloc2ndCount;
   7633                 usedBytes += suballoc.size;
   7634 
   7635                 // 3. Prepare for next iteration.
   7636                 lastOffset = suballoc.offset + suballoc.size;
   7637                 ++nextAlloc2ndIndex;
   7638             }
   7639             // We are at the end.
   7640             else
   7641             {
   7642                 if (lastOffset < freeSpace2ndTo1stEnd)
   7643                 {
   7644                     // There is free space from lastOffset to freeSpace2ndTo1stEnd.
   7645                     ++unusedRangeCount;
   7646                 }
   7647 
   7648                 // End of loop.
   7649                 lastOffset = freeSpace2ndTo1stEnd;
   7650             }
   7651         }
   7652     }
   7653 
   7654     size_t nextAlloc1stIndex = m_1stNullItemsBeginCount;
   7655     size_t alloc1stCount = 0;
   7656     const VkDeviceSize freeSpace1stTo2ndEnd =
   7657         m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK ? suballocations2nd.back().offset : size;
   7658     while (lastOffset < freeSpace1stTo2ndEnd)
   7659     {
    7660         // Find next non-null allocation or move nextAlloc1stIndex to the end.
   7661         while (nextAlloc1stIndex < suballoc1stCount &&
   7662             suballocations1st[nextAlloc1stIndex].userData == VMA_NULL)
   7663         {
   7664             ++nextAlloc1stIndex;
   7665         }
   7666 
   7667         // Found non-null allocation.
   7668         if (nextAlloc1stIndex < suballoc1stCount)
   7669         {
   7670             const VmaSuballocation& suballoc = suballocations1st[nextAlloc1stIndex];
   7671 
   7672             // 1. Process free space before this allocation.
   7673             if (lastOffset < suballoc.offset)
   7674             {
   7675                 // There is free space from lastOffset to suballoc.offset.
   7676                 ++unusedRangeCount;
   7677             }
   7678 
   7679             // 2. Process this allocation.
   7680             // There is allocation with suballoc.offset, suballoc.size.
   7681             ++alloc1stCount;
   7682             usedBytes += suballoc.size;
   7683 
   7684             // 3. Prepare for next iteration.
   7685             lastOffset = suballoc.offset + suballoc.size;
   7686             ++nextAlloc1stIndex;
   7687         }
   7688         // We are at the end.
   7689         else
   7690         {
   7691             if (lastOffset < freeSpace1stTo2ndEnd)
   7692             {
   7693                 // There is free space from lastOffset to freeSpace1stTo2ndEnd.
   7694                 ++unusedRangeCount;
   7695             }
   7696 
   7697             // End of loop.
   7698             lastOffset = freeSpace1stTo2ndEnd;
   7699         }
   7700     }
   7701 
   7702     if (m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK)
   7703     {
   7704         size_t nextAlloc2ndIndex = suballocations2nd.size() - 1;
   7705         while (lastOffset < size)
   7706         {
   7707             // Find next non-null allocation or move nextAlloc2ndIndex to the end.
   7708             while (nextAlloc2ndIndex != SIZE_MAX &&
   7709                 suballocations2nd[nextAlloc2ndIndex].userData == VMA_NULL)
   7710             {
   7711                 --nextAlloc2ndIndex;
   7712             }
   7713 
   7714             // Found non-null allocation.
   7715             if (nextAlloc2ndIndex != SIZE_MAX)
   7716             {
   7717                 const VmaSuballocation& suballoc = suballocations2nd[nextAlloc2ndIndex];
   7718 
   7719                 // 1. Process free space before this allocation.
   7720                 if (lastOffset < suballoc.offset)
   7721                 {
   7722                     // There is free space from lastOffset to suballoc.offset.
   7723                     ++unusedRangeCount;
   7724                 }
   7725 
   7726                 // 2. Process this allocation.
   7727                 // There is allocation with suballoc.offset, suballoc.size.
   7728                 ++alloc2ndCount;
   7729                 usedBytes += suballoc.size;
   7730 
   7731                 // 3. Prepare for next iteration.
   7732                 lastOffset = suballoc.offset + suballoc.size;
   7733                 --nextAlloc2ndIndex;
   7734             }
   7735             // We are at the end.
   7736             else
   7737             {
   7738                 if (lastOffset < size)
   7739                 {
   7740                     // There is free space from lastOffset to size.
   7741                     ++unusedRangeCount;
   7742                 }
   7743 
   7744                 // End of loop.
   7745                 lastOffset = size;
   7746             }
   7747         }
   7748     }
   7749 
   7750     const VkDeviceSize unusedBytes = size - usedBytes;
   7751     PrintDetailedMap_Begin(json, unusedBytes, alloc1stCount + alloc2ndCount, unusedRangeCount);
   7752 
   7753     // SECOND PASS
   7754     lastOffset = 0;
   7755 
   7756     if (m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER)
   7757     {
   7758         const VkDeviceSize freeSpace2ndTo1stEnd = suballocations1st[m_1stNullItemsBeginCount].offset;
   7759         size_t nextAlloc2ndIndex = 0;
   7760         while (lastOffset < freeSpace2ndTo1stEnd)
   7761         {
   7762             // Find next non-null allocation or move nextAlloc2ndIndex to the end.
   7763             while (nextAlloc2ndIndex < suballoc2ndCount &&
   7764                 suballocations2nd[nextAlloc2ndIndex].userData == VMA_NULL)
   7765             {
   7766                 ++nextAlloc2ndIndex;
   7767             }
   7768 
   7769             // Found non-null allocation.
   7770             if (nextAlloc2ndIndex < suballoc2ndCount)
   7771             {
   7772                 const VmaSuballocation& suballoc = suballocations2nd[nextAlloc2ndIndex];
   7773 
   7774                 // 1. Process free space before this allocation.
   7775                 if (lastOffset < suballoc.offset)
   7776                 {
   7777                     // There is free space from lastOffset to suballoc.offset.
   7778                     const VkDeviceSize unusedRangeSize = suballoc.offset - lastOffset;
   7779                     PrintDetailedMap_UnusedRange(json, lastOffset, unusedRangeSize);
   7780                 }
   7781 
   7782                 // 2. Process this allocation.
   7783                 // There is allocation with suballoc.offset, suballoc.size.
   7784                 PrintDetailedMap_Allocation(json, suballoc.offset, suballoc.size, suballoc.userData);
   7785 
   7786                 // 3. Prepare for next iteration.
   7787                 lastOffset = suballoc.offset + suballoc.size;
   7788                 ++nextAlloc2ndIndex;
   7789             }
   7790             // We are at the end.
   7791             else
   7792             {
   7793                 if (lastOffset < freeSpace2ndTo1stEnd)
   7794                 {
   7795                     // There is free space from lastOffset to freeSpace2ndTo1stEnd.
   7796                     const VkDeviceSize unusedRangeSize = freeSpace2ndTo1stEnd - lastOffset;
   7797                     PrintDetailedMap_UnusedRange(json, lastOffset, unusedRangeSize);
   7798                 }
   7799 
   7800                 // End of loop.
   7801                 lastOffset = freeSpace2ndTo1stEnd;
   7802             }
   7803         }
   7804     }
   7805 
   7806     nextAlloc1stIndex = m_1stNullItemsBeginCount;
   7807     while (lastOffset < freeSpace1stTo2ndEnd)
   7808     {
    7809         // Find next non-null allocation or move nextAlloc1stIndex to the end.
   7810         while (nextAlloc1stIndex < suballoc1stCount &&
   7811             suballocations1st[nextAlloc1stIndex].userData == VMA_NULL)
   7812         {
   7813             ++nextAlloc1stIndex;
   7814         }
   7815 
   7816         // Found non-null allocation.
   7817         if (nextAlloc1stIndex < suballoc1stCount)
   7818         {
   7819             const VmaSuballocation& suballoc = suballocations1st[nextAlloc1stIndex];
   7820 
   7821             // 1. Process free space before this allocation.
   7822             if (lastOffset < suballoc.offset)
   7823             {
   7824                 // There is free space from lastOffset to suballoc.offset.
   7825                 const VkDeviceSize unusedRangeSize = suballoc.offset - lastOffset;
   7826                 PrintDetailedMap_UnusedRange(json, lastOffset, unusedRangeSize);
   7827             }
   7828 
   7829             // 2. Process this allocation.
   7830             // There is allocation with suballoc.offset, suballoc.size.
   7831             PrintDetailedMap_Allocation(json, suballoc.offset, suballoc.size, suballoc.userData);
   7832 
   7833             // 3. Prepare for next iteration.
   7834             lastOffset = suballoc.offset + suballoc.size;
   7835             ++nextAlloc1stIndex;
   7836         }
   7837         // We are at the end.
   7838         else
   7839         {
   7840             if (lastOffset < freeSpace1stTo2ndEnd)
   7841             {
   7842                 // There is free space from lastOffset to freeSpace1stTo2ndEnd.
   7843                 const VkDeviceSize unusedRangeSize = freeSpace1stTo2ndEnd - lastOffset;
   7844                 PrintDetailedMap_UnusedRange(json, lastOffset, unusedRangeSize);
   7845             }
   7846 
   7847             // End of loop.
   7848             lastOffset = freeSpace1stTo2ndEnd;
   7849         }
   7850     }
   7851 
   7852     if (m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK)
   7853     {
   7854         size_t nextAlloc2ndIndex = suballocations2nd.size() - 1;
   7855         while (lastOffset < size)
   7856         {
   7857             // Find next non-null allocation or move nextAlloc2ndIndex to the end.
   7858             while (nextAlloc2ndIndex != SIZE_MAX &&
   7859                 suballocations2nd[nextAlloc2ndIndex].userData == VMA_NULL)
   7860             {
   7861                 --nextAlloc2ndIndex;
   7862             }
   7863 
   7864             // Found non-null allocation.
   7865             if (nextAlloc2ndIndex != SIZE_MAX)
   7866             {
   7867                 const VmaSuballocation& suballoc = suballocations2nd[nextAlloc2ndIndex];
   7868 
   7869                 // 1. Process free space before this allocation.
   7870                 if (lastOffset < suballoc.offset)
   7871                 {
   7872                     // There is free space from lastOffset to suballoc.offset.
   7873                     const VkDeviceSize unusedRangeSize = suballoc.offset - lastOffset;
   7874                     PrintDetailedMap_UnusedRange(json, lastOffset, unusedRangeSize);
   7875                 }
   7876 
   7877                 // 2. Process this allocation.
   7878                 // There is allocation with suballoc.offset, suballoc.size.
   7879                 PrintDetailedMap_Allocation(json, suballoc.offset, suballoc.size, suballoc.userData);
   7880 
   7881                 // 3. Prepare for next iteration.
   7882                 lastOffset = suballoc.offset + suballoc.size;
   7883                 --nextAlloc2ndIndex;
   7884             }
   7885             // We are at the end.
   7886             else
   7887             {
   7888                 if (lastOffset < size)
   7889                 {
   7890                     // There is free space from lastOffset to size.
   7891                     const VkDeviceSize unusedRangeSize = size - lastOffset;
   7892                     PrintDetailedMap_UnusedRange(json, lastOffset, unusedRangeSize);
   7893                 }
   7894 
   7895                 // End of loop.
   7896                 lastOffset = size;
   7897             }
   7898         }
   7899     }
   7900 
   7901     PrintDetailedMap_End(json);
   7902 }
   7903 #endif // VMA_STATS_STRING_ENABLED
   7904 
   7905 bool VmaBlockMetadata_Linear::CreateAllocationRequest(
   7906     VkDeviceSize allocSize,
   7907     VkDeviceSize allocAlignment,
   7908     bool upperAddress,
   7909     VmaSuballocationType allocType,
   7910     uint32_t strategy,
   7911     VmaAllocationRequest* pAllocationRequest)
   7912 {
   7913     VMA_ASSERT(allocSize > 0);
   7914     VMA_ASSERT(allocType != VMA_SUBALLOCATION_TYPE_FREE);
   7915     VMA_ASSERT(pAllocationRequest != VMA_NULL);
   7916     VMA_HEAVY_ASSERT(Validate());
   7917 
   7918     if(allocSize > GetSize())
   7919         return false;
   7920 
   7921     pAllocationRequest->size = allocSize;
   7922     return upperAddress ?
   7923         CreateAllocationRequest_UpperAddress(
   7924             allocSize, allocAlignment, allocType, strategy, pAllocationRequest) :
   7925         CreateAllocationRequest_LowerAddress(
   7926             allocSize, allocAlignment, allocType, strategy, pAllocationRequest);
   7927 }
   7928 
   7929 VkResult VmaBlockMetadata_Linear::CheckCorruption(const void* pBlockData)
   7930 {
   7931     VMA_ASSERT(!IsVirtual());
   7932     SuballocationVectorType& suballocations1st = AccessSuballocations1st();
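             // When corruption detection is enabled, VMA writes a magic value into the debug
             // margin after every allocation; verify it is still intact for each live item.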
   7933     for (size_t i = m_1stNullItemsBeginCount, count = suballocations1st.size(); i < count; ++i)
   7934     {
   7935         const VmaSuballocation& suballoc = suballocations1st[i];
   7936         if (suballoc.type != VMA_SUBALLOCATION_TYPE_FREE)
   7937         {
   7938             if (!VmaValidateMagicValue(pBlockData, suballoc.offset + suballoc.size))
   7939             {
   7940                 VMA_ASSERT(0 && "MEMORY CORRUPTION DETECTED AFTER VALIDATED ALLOCATION!");
   7941                 return VK_ERROR_UNKNOWN_COPY;
   7942             }
   7943         }
   7944     }
   7945 
   7946     SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
   7947     for (size_t i = 0, count = suballocations2nd.size(); i < count; ++i)
   7948     {
   7949         const VmaSuballocation& suballoc = suballocations2nd[i];
   7950         if (suballoc.type != VMA_SUBALLOCATION_TYPE_FREE)
   7951         {
   7952             if (!VmaValidateMagicValue(pBlockData, suballoc.offset + suballoc.size))
   7953             {
   7954                 VMA_ASSERT(0 && "MEMORY CORRUPTION DETECTED AFTER VALIDATED ALLOCATION!");
   7955                 return VK_ERROR_UNKNOWN_COPY;
   7956             }
   7957         }
   7958     }
   7959 
   7960     return VK_SUCCESS;
   7961 }
   7962 
   7963 void VmaBlockMetadata_Linear::Alloc(
   7964     const VmaAllocationRequest& request,
   7965     VmaSuballocationType type,
   7966     void* userData)
   7967 {
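             // allocHandle encodes offset + 1 (see CreateAllocationRequest_*), so that a zero
             // handle can mean "no allocation"; decode it back to the real offset here.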
   7968     const VkDeviceSize offset = (VkDeviceSize)request.allocHandle - 1;
   7969     const VmaSuballocation newSuballoc = { offset, request.size, userData, type };
   7970 
   7971     switch (request.type)
   7972     {
   7973     case VmaAllocationRequestType::UpperAddress:
   7974     {
   7975         VMA_ASSERT(m_2ndVectorMode != SECOND_VECTOR_RING_BUFFER &&
   7976             "CRITICAL ERROR: Trying to use linear allocator as double stack while it was already used as ring buffer.");
   7977         SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
   7978         suballocations2nd.push_back(newSuballoc);
   7979         m_2ndVectorMode = SECOND_VECTOR_DOUBLE_STACK;
   7980     }
   7981     break;
   7982     case VmaAllocationRequestType::EndOf1st:
   7983     {
   7984         SuballocationVectorType& suballocations1st = AccessSuballocations1st();
   7985 
   7986         VMA_ASSERT(suballocations1st.empty() ||
   7987             offset >= suballocations1st.back().offset + suballocations1st.back().size);
   7988         // Check if it fits before the end of the block.
   7989         VMA_ASSERT(offset + request.size <= GetSize());
   7990 
   7991         suballocations1st.push_back(newSuballoc);
   7992     }
   7993     break;
   7994     case VmaAllocationRequestType::EndOf2nd:
   7995     {
   7996         SuballocationVectorType& suballocations1st = AccessSuballocations1st();
   7997         // New allocation at the end of 2-part ring buffer, so before first allocation from 1st vector.
   7998         VMA_ASSERT(!suballocations1st.empty() &&
   7999             offset + request.size <= suballocations1st[m_1stNullItemsBeginCount].offset);
   8000         SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
   8001 
   8002         switch (m_2ndVectorMode)
   8003         {
   8004         case SECOND_VECTOR_EMPTY:
   8005             // First allocation from second part ring buffer.
   8006             VMA_ASSERT(suballocations2nd.empty());
   8007             m_2ndVectorMode = SECOND_VECTOR_RING_BUFFER;
   8008             break;
   8009         case SECOND_VECTOR_RING_BUFFER:
   8010             // 2-part ring buffer is already started.
   8011             VMA_ASSERT(!suballocations2nd.empty());
   8012             break;
   8013         case SECOND_VECTOR_DOUBLE_STACK:
   8014             VMA_ASSERT(0 && "CRITICAL ERROR: Trying to use linear allocator as ring buffer while it was already used as double stack.");
   8015             break;
   8016         default:
   8017             VMA_ASSERT(0);
   8018         }
   8019 
   8020         suballocations2nd.push_back(newSuballoc);
   8021     }
   8022     break;
   8023     default:
   8024         VMA_ASSERT(0 && "CRITICAL INTERNAL ERROR.");
   8025     }
   8026 
   8027     m_SumFreeSize -= newSuballoc.size;
   8028 }
   8029 
   8030 void VmaBlockMetadata_Linear::Free(VmaAllocHandle allocHandle)
   8031 {
   8032     SuballocationVectorType& suballocations1st = AccessSuballocations1st();
   8033     SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
   8034     VkDeviceSize offset = (VkDeviceSize)allocHandle - 1;
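             // Fast paths first: the oldest allocation in 1st, the newest in 2nd, or the newest
             // in 1st can be freed in O(1). Anything in the middle is found by binary search and
             // only marked free; CleanupAfterFree() lazily trims and compacts the vectors.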
   8035 
   8036     if (!suballocations1st.empty())
   8037     {
    8038         // If this is the oldest live allocation in 1st vector: mark it free and grow the null-items prefix.
   8039         VmaSuballocation& firstSuballoc = suballocations1st[m_1stNullItemsBeginCount];
   8040         if (firstSuballoc.offset == offset)
   8041         {
   8042             firstSuballoc.type = VMA_SUBALLOCATION_TYPE_FREE;
   8043             firstSuballoc.userData = VMA_NULL;
   8044             m_SumFreeSize += firstSuballoc.size;
   8045             ++m_1stNullItemsBeginCount;
   8046             CleanupAfterFree();
   8047             return;
   8048         }
   8049     }
   8050 
   8051     // Last allocation in 2-part ring buffer or top of upper stack (same logic).
   8052     if (m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER ||
   8053         m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK)
   8054     {
   8055         VmaSuballocation& lastSuballoc = suballocations2nd.back();
   8056         if (lastSuballoc.offset == offset)
   8057         {
   8058             m_SumFreeSize += lastSuballoc.size;
   8059             suballocations2nd.pop_back();
   8060             CleanupAfterFree();
   8061             return;
   8062         }
   8063     }
   8064     // Last allocation in 1st vector.
   8065     else if (m_2ndVectorMode == SECOND_VECTOR_EMPTY)
   8066     {
   8067         VmaSuballocation& lastSuballoc = suballocations1st.back();
   8068         if (lastSuballoc.offset == offset)
   8069         {
   8070             m_SumFreeSize += lastSuballoc.size;
   8071             suballocations1st.pop_back();
   8072             CleanupAfterFree();
   8073             return;
   8074         }
   8075     }
   8076 
   8077     VmaSuballocation refSuballoc;
   8078     refSuballoc.offset = offset;
    8079     // Rest of the members intentionally stay uninitialized for better performance.
   8080 
   8081     // Item from the middle of 1st vector.
   8082     {
   8083         const SuballocationVectorType::iterator it = VmaBinaryFindSorted(
   8084             suballocations1st.begin() + m_1stNullItemsBeginCount,
   8085             suballocations1st.end(),
   8086             refSuballoc,
   8087             VmaSuballocationOffsetLess());
   8088         if (it != suballocations1st.end())
   8089         {
   8090             it->type = VMA_SUBALLOCATION_TYPE_FREE;
   8091             it->userData = VMA_NULL;
   8092             ++m_1stNullItemsMiddleCount;
   8093             m_SumFreeSize += it->size;
   8094             CleanupAfterFree();
   8095             return;
   8096         }
   8097     }
   8098 
   8099     if (m_2ndVectorMode != SECOND_VECTOR_EMPTY)
   8100     {
   8101         // Item from the middle of 2nd vector.
   8102         const SuballocationVectorType::iterator it = m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER ?
   8103             VmaBinaryFindSorted(suballocations2nd.begin(), suballocations2nd.end(), refSuballoc, VmaSuballocationOffsetLess()) :
   8104             VmaBinaryFindSorted(suballocations2nd.begin(), suballocations2nd.end(), refSuballoc, VmaSuballocationOffsetGreater());
   8105         if (it != suballocations2nd.end())
   8106         {
   8107             it->type = VMA_SUBALLOCATION_TYPE_FREE;
   8108             it->userData = VMA_NULL;
   8109             ++m_2ndNullItemsCount;
   8110             m_SumFreeSize += it->size;
   8111             CleanupAfterFree();
   8112             return;
   8113         }
   8114     }
   8115 
   8116     VMA_ASSERT(0 && "Allocation to free not found in linear allocator!");
   8117 }
   8118 
   8119 void VmaBlockMetadata_Linear::GetAllocationInfo(VmaAllocHandle allocHandle, VmaVirtualAllocationInfo& outInfo)
   8120 {
   8121     outInfo.offset = (VkDeviceSize)allocHandle - 1;
   8122     VmaSuballocation& suballoc = FindSuballocation(outInfo.offset);
   8123     outInfo.size = suballoc.size;
   8124     outInfo.pUserData = suballoc.userData;
   8125 }
   8126 
   8127 void* VmaBlockMetadata_Linear::GetAllocationUserData(VmaAllocHandle allocHandle) const
   8128 {
   8129     return FindSuballocation((VkDeviceSize)allocHandle - 1).userData;
   8130 }
   8131 
   8132 VmaAllocHandle VmaBlockMetadata_Linear::GetAllocationListBegin() const
   8133 {
   8134     // Function only used for defragmentation, which is disabled for this algorithm
   8135     VMA_ASSERT(0);
   8136     return VK_NULL_HANDLE;
   8137 }
   8138 
   8139 VmaAllocHandle VmaBlockMetadata_Linear::GetNextAllocation(VmaAllocHandle prevAlloc) const
   8140 {
   8141     // Function only used for defragmentation, which is disabled for this algorithm
   8142     VMA_ASSERT(0);
   8143     return VK_NULL_HANDLE;
   8144 }
   8145 
   8146 VkDeviceSize VmaBlockMetadata_Linear::GetNextFreeRegionSize(VmaAllocHandle alloc) const
   8147 {
   8148     // Function only used for defragmentation, which is disabled for this algorithm
   8149     VMA_ASSERT(0);
   8150     return 0;
   8151 }
   8152 
   8153 void VmaBlockMetadata_Linear::Clear()
   8154 {
   8155     m_SumFreeSize = GetSize();
   8156     m_Suballocations0.clear();
   8157     m_Suballocations1.clear();
   8158     // Leaving m_1stVectorIndex unchanged - it doesn't matter.
   8159     m_2ndVectorMode = SECOND_VECTOR_EMPTY;
   8160     m_1stNullItemsBeginCount = 0;
   8161     m_1stNullItemsMiddleCount = 0;
   8162     m_2ndNullItemsCount = 0;
   8163 }
   8164 
   8165 void VmaBlockMetadata_Linear::SetAllocationUserData(VmaAllocHandle allocHandle, void* userData)
   8166 {
   8167     VmaSuballocation& suballoc = FindSuballocation((VkDeviceSize)allocHandle - 1);
   8168     suballoc.userData = userData;
   8169 }
   8170 
   8171 void VmaBlockMetadata_Linear::DebugLogAllAllocations() const
   8172 {
   8173     const SuballocationVectorType& suballocations1st = AccessSuballocations1st();
   8174     for (auto it = suballocations1st.begin() + m_1stNullItemsBeginCount; it != suballocations1st.end(); ++it)
   8175         if (it->type != VMA_SUBALLOCATION_TYPE_FREE)
   8176             DebugLogAllocation(it->offset, it->size, it->userData);
   8177 
   8178     const SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
   8179     for (auto it = suballocations2nd.begin(); it != suballocations2nd.end(); ++it)
   8180         if (it->type != VMA_SUBALLOCATION_TYPE_FREE)
   8181             DebugLogAllocation(it->offset, it->size, it->userData);
   8182 }
   8183 
   8184 VmaSuballocation& VmaBlockMetadata_Linear::FindSuballocation(VkDeviceSize offset) const
   8185 {
   8186     const SuballocationVectorType& suballocations1st = AccessSuballocations1st();
   8187     const SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
   8188 
   8189     VmaSuballocation refSuballoc;
   8190     refSuballoc.offset = offset;
    8191     // Rest of the members intentionally stay uninitialized for better performance.
   8192 
   8193     // Item from the 1st vector.
   8194     {
   8195         SuballocationVectorType::const_iterator it = VmaBinaryFindSorted(
   8196             suballocations1st.begin() + m_1stNullItemsBeginCount,
   8197             suballocations1st.end(),
   8198             refSuballoc,
   8199             VmaSuballocationOffsetLess());
   8200         if (it != suballocations1st.end())
   8201         {
   8202             return const_cast<VmaSuballocation&>(*it);
   8203         }
   8204     }
   8205 
   8206     if (m_2ndVectorMode != SECOND_VECTOR_EMPTY)
   8207     {
    8208         // Item from the 2nd vector.
   8209         SuballocationVectorType::const_iterator it = m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER ?
   8210             VmaBinaryFindSorted(suballocations2nd.begin(), suballocations2nd.end(), refSuballoc, VmaSuballocationOffsetLess()) :
   8211             VmaBinaryFindSorted(suballocations2nd.begin(), suballocations2nd.end(), refSuballoc, VmaSuballocationOffsetGreater());
   8212         if (it != suballocations2nd.end())
   8213         {
   8214             return const_cast<VmaSuballocation&>(*it);
   8215         }
   8216     }
   8217 
   8218     VMA_ASSERT(0 && "Allocation not found in linear allocator!");
   8219     return const_cast<VmaSuballocation&>(suballocations1st.back()); // Should never occur.
   8220 }
   8221 
   8222 bool VmaBlockMetadata_Linear::ShouldCompact1st() const
   8223 {
   8224     const size_t nullItemCount = m_1stNullItemsBeginCount + m_1stNullItemsMiddleCount;
   8225     const size_t suballocCount = AccessSuballocations1st().size();
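             // Compact only when the vector is reasonably large and null items are at least
             // 1.5x the live items (i.e. at least 60% of the vector is dead entries).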
   8226     return suballocCount > 32 && nullItemCount * 2 >= (suballocCount - nullItemCount) * 3;
   8227 }
   8228 
   8229 void VmaBlockMetadata_Linear::CleanupAfterFree()
   8230 {
   8231     SuballocationVectorType& suballocations1st = AccessSuballocations1st();
   8232     SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
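             // Trim freed (null) items from the ends of both vectors, compact the 1st vector if
             // it is mostly dead, and when 1st empties in ring-buffer mode promote 2nd to 1st.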
   8233 
   8234     if (IsEmpty())
   8235     {
   8236         suballocations1st.clear();
   8237         suballocations2nd.clear();
   8238         m_1stNullItemsBeginCount = 0;
   8239         m_1stNullItemsMiddleCount = 0;
   8240         m_2ndNullItemsCount = 0;
   8241         m_2ndVectorMode = SECOND_VECTOR_EMPTY;
   8242     }
   8243     else
   8244     {
   8245         const size_t suballoc1stCount = suballocations1st.size();
   8246         const size_t nullItem1stCount = m_1stNullItemsBeginCount + m_1stNullItemsMiddleCount;
   8247         VMA_ASSERT(nullItem1stCount <= suballoc1stCount);
   8248 
   8249         // Find more null items at the beginning of 1st vector.
   8250         while (m_1stNullItemsBeginCount < suballoc1stCount &&
   8251             suballocations1st[m_1stNullItemsBeginCount].type == VMA_SUBALLOCATION_TYPE_FREE)
   8252         {
   8253             ++m_1stNullItemsBeginCount;
   8254             --m_1stNullItemsMiddleCount;
   8255         }
   8256 
   8257         // Find more null items at the end of 1st vector.
   8258         while (m_1stNullItemsMiddleCount > 0 &&
   8259             suballocations1st.back().type == VMA_SUBALLOCATION_TYPE_FREE)
   8260         {
   8261             --m_1stNullItemsMiddleCount;
   8262             suballocations1st.pop_back();
   8263         }
   8264 
   8265         // Find more null items at the end of 2nd vector.
   8266         while (m_2ndNullItemsCount > 0 &&
   8267             suballocations2nd.back().type == VMA_SUBALLOCATION_TYPE_FREE)
   8268         {
   8269             --m_2ndNullItemsCount;
   8270             suballocations2nd.pop_back();
   8271         }
   8272 
   8273         // Find more null items at the beginning of 2nd vector.
   8274         while (m_2ndNullItemsCount > 0 &&
   8275             suballocations2nd[0].type == VMA_SUBALLOCATION_TYPE_FREE)
   8276         {
   8277             --m_2ndNullItemsCount;
   8278             VmaVectorRemove(suballocations2nd, 0);
   8279         }
   8280 
   8281         if (ShouldCompact1st())
   8282         {
   8283             const size_t nonNullItemCount = suballoc1stCount - nullItem1stCount;
   8284             size_t srcIndex = m_1stNullItemsBeginCount;
   8285             for (size_t dstIndex = 0; dstIndex < nonNullItemCount; ++dstIndex)
   8286             {
   8287                 while (suballocations1st[srcIndex].type == VMA_SUBALLOCATION_TYPE_FREE)
   8288                 {
   8289                     ++srcIndex;
   8290                 }
   8291                 if (dstIndex != srcIndex)
   8292                 {
   8293                     suballocations1st[dstIndex] = suballocations1st[srcIndex];
   8294                 }
   8295                 ++srcIndex;
   8296             }
   8297             suballocations1st.resize(nonNullItemCount);
   8298             m_1stNullItemsBeginCount = 0;
   8299             m_1stNullItemsMiddleCount = 0;
   8300         }
   8301 
   8302         // 2nd vector became empty.
   8303         if (suballocations2nd.empty())
   8304         {
   8305             m_2ndVectorMode = SECOND_VECTOR_EMPTY;
   8306         }
   8307 
   8308         // 1st vector became empty.
   8309         if (suballocations1st.size() - m_1stNullItemsBeginCount == 0)
   8310         {
   8311             suballocations1st.clear();
   8312             m_1stNullItemsBeginCount = 0;
   8313 
   8314             if (!suballocations2nd.empty() && m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER)
   8315             {
   8316                 // Swap 1st with 2nd. Now 2nd is empty.
   8317                 m_2ndVectorMode = SECOND_VECTOR_EMPTY;
   8318                 m_1stNullItemsMiddleCount = m_2ndNullItemsCount;
   8319                 while (m_1stNullItemsBeginCount < suballocations2nd.size() &&
   8320                     suballocations2nd[m_1stNullItemsBeginCount].type == VMA_SUBALLOCATION_TYPE_FREE)
   8321                 {
   8322                     ++m_1stNullItemsBeginCount;
   8323                     --m_1stNullItemsMiddleCount;
   8324                 }
   8325                 m_2ndNullItemsCount = 0;
   8326                 m_1stVectorIndex ^= 1;
   8327             }
   8328         }
   8329     }
   8330 
   8331     VMA_HEAVY_ASSERT(Validate());
   8332 }
   8333 
   8334 bool VmaBlockMetadata_Linear::CreateAllocationRequest_LowerAddress(
   8335     VkDeviceSize allocSize,
   8336     VkDeviceSize allocAlignment,
   8337     VmaSuballocationType allocType,
   8338     uint32_t strategy,
   8339     VmaAllocationRequest* pAllocationRequest)
   8340 {
   8341     const VkDeviceSize blockSize = GetSize();
   8342     const VkDeviceSize debugMargin = GetDebugMargin();
   8343     const VkDeviceSize bufferImageGranularity = GetBufferImageGranularity();
   8344     SuballocationVectorType& suballocations1st = AccessSuballocations1st();
   8345     SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
   8346 
   8347     if (m_2ndVectorMode == SECOND_VECTOR_EMPTY || m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK)
   8348     {
   8349         // Try to allocate at the end of 1st vector.
   8350 
   8351         VkDeviceSize resultBaseOffset = 0;
   8352         if (!suballocations1st.empty())
   8353         {
   8354             const VmaSuballocation& lastSuballoc = suballocations1st.back();
   8355             resultBaseOffset = lastSuballoc.offset + lastSuballoc.size + debugMargin;
   8356         }
   8357 
   8358         // Start from offset equal to beginning of free space.
   8359         VkDeviceSize resultOffset = resultBaseOffset;
   8360 
   8361         // Apply alignment.
   8362         resultOffset = VmaAlignUp(resultOffset, allocAlignment);
   8363 
   8364         // Check previous suballocations for BufferImageGranularity conflicts.
   8365         // Make bigger alignment if necessary.
   8366         if (bufferImageGranularity > 1 && bufferImageGranularity != allocAlignment && !suballocations1st.empty())
   8367         {
   8368             bool bufferImageGranularityConflict = false;
   8369             for (size_t prevSuballocIndex = suballocations1st.size(); prevSuballocIndex--; )
   8370             {
   8371                 const VmaSuballocation& prevSuballoc = suballocations1st[prevSuballocIndex];
   8372                 if (VmaBlocksOnSamePage(prevSuballoc.offset, prevSuballoc.size, resultOffset, bufferImageGranularity))
   8373                 {
   8374                     if (VmaIsBufferImageGranularityConflict(prevSuballoc.type, allocType))
   8375                     {
   8376                         bufferImageGranularityConflict = true;
   8377                         break;
   8378                     }
   8379                 }
   8380                 else
   8381                     // Already on previous page.
   8382                     break;
   8383             }
   8384             if (bufferImageGranularityConflict)
   8385             {
   8386                 resultOffset = VmaAlignUp(resultOffset, bufferImageGranularity);
   8387             }
   8388         }
   8389 
   8390         const VkDeviceSize freeSpaceEnd = m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK ?
   8391             suballocations2nd.back().offset : blockSize;
   8392 
   8393         // There is enough free space at the end after alignment.
   8394         if (resultOffset + allocSize + debugMargin <= freeSpaceEnd)
   8395         {
   8396             // Check next suballocations for BufferImageGranularity conflicts.
   8397             // If conflict exists, allocation cannot be made here.
   8398             if ((allocSize % bufferImageGranularity || resultOffset % bufferImageGranularity) && m_2ndVectorMode == SECOND_VECTOR_DOUBLE_STACK)
   8399             {
   8400                 for (size_t nextSuballocIndex = suballocations2nd.size(); nextSuballocIndex--; )
   8401                 {
   8402                     const VmaSuballocation& nextSuballoc = suballocations2nd[nextSuballocIndex];
   8403                     if (VmaBlocksOnSamePage(resultOffset, allocSize, nextSuballoc.offset, bufferImageGranularity))
   8404                     {
   8405                         if (VmaIsBufferImageGranularityConflict(allocType, nextSuballoc.type))
   8406                         {
   8407                             return false;
   8408                         }
   8409                     }
   8410                     else
   8411                     {
   8412                         // Already on previous page.
   8413                         break;
   8414                     }
   8415                 }
   8416             }
   8417 
   8418             // All tests passed: Success.
   8419             pAllocationRequest->allocHandle = (VmaAllocHandle)(resultOffset + 1);
   8420             // pAllocationRequest->item, customData unused.
   8421             pAllocationRequest->type = VmaAllocationRequestType::EndOf1st;
   8422             return true;
   8423         }
   8424     }
   8425 
    8426     // Wrap-around: try to allocate at the end of the 2nd vector, treating the
    8427     // beginning of the 1st vector as the end of the free space.
   8428     if (m_2ndVectorMode == SECOND_VECTOR_EMPTY || m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER)
   8429     {
   8430         VMA_ASSERT(!suballocations1st.empty());
   8431 
   8432         VkDeviceSize resultBaseOffset = 0;
   8433         if (!suballocations2nd.empty())
   8434         {
   8435             const VmaSuballocation& lastSuballoc = suballocations2nd.back();
   8436             resultBaseOffset = lastSuballoc.offset + lastSuballoc.size + debugMargin;
   8437         }
   8438 
   8439         // Start from offset equal to beginning of free space.
   8440         VkDeviceSize resultOffset = resultBaseOffset;
   8441 
   8442         // Apply alignment.
   8443         resultOffset = VmaAlignUp(resultOffset, allocAlignment);
   8444 
   8445         // Check previous suballocations for BufferImageGranularity conflicts.
   8446         // Make bigger alignment if necessary.
   8447         if (bufferImageGranularity > 1 && bufferImageGranularity != allocAlignment && !suballocations2nd.empty())
   8448         {
   8449             bool bufferImageGranularityConflict = false;
   8450             for (size_t prevSuballocIndex = suballocations2nd.size(); prevSuballocIndex--; )
   8451             {
   8452                 const VmaSuballocation& prevSuballoc = suballocations2nd[prevSuballocIndex];
   8453                 if (VmaBlocksOnSamePage(prevSuballoc.offset, prevSuballoc.size, resultOffset, bufferImageGranularity))
   8454                 {
   8455                     if (VmaIsBufferImageGranularityConflict(prevSuballoc.type, allocType))
   8456                     {
   8457                         bufferImageGranularityConflict = true;
   8458                         break;
   8459                     }
   8460                 }
   8461                 else
   8462                     // Already on previous page.
   8463                     break;
   8464             }
   8465             if (bufferImageGranularityConflict)
   8466             {
   8467                 resultOffset = VmaAlignUp(resultOffset, bufferImageGranularity);
   8468             }
   8469         }
   8470 
   8471         size_t index1st = m_1stNullItemsBeginCount;
   8472 
   8473         // There is enough free space at the end after alignment.
   8474         if ((index1st == suballocations1st.size() && resultOffset + allocSize + debugMargin <= blockSize) ||
   8475             (index1st < suballocations1st.size() && resultOffset + allocSize + debugMargin <= suballocations1st[index1st].offset))
   8476         {
   8477             // Check next suballocations for BufferImageGranularity conflicts.
   8478             // If conflict exists, allocation cannot be made here.
   8479             if (allocSize % bufferImageGranularity || resultOffset % bufferImageGranularity)
   8480             {
   8481                 for (size_t nextSuballocIndex = index1st;
   8482                     nextSuballocIndex < suballocations1st.size();
   8483                     nextSuballocIndex++)
   8484                 {
   8485                     const VmaSuballocation& nextSuballoc = suballocations1st[nextSuballocIndex];
   8486                     if (VmaBlocksOnSamePage(resultOffset, allocSize, nextSuballoc.offset, bufferImageGranularity))
   8487                     {
   8488                         if (VmaIsBufferImageGranularityConflict(allocType, nextSuballoc.type))
   8489                         {
   8490                             return false;
   8491                         }
   8492                     }
   8493                     else
   8494                     {
   8495                         // Already on next page.
   8496                         break;
   8497                     }
   8498                 }
   8499             }
   8500 
   8501             // All tests passed: Success.
   8502             pAllocationRequest->allocHandle = (VmaAllocHandle)(resultOffset + 1);
   8503             pAllocationRequest->type = VmaAllocationRequestType::EndOf2nd;
   8504             // pAllocationRequest->item, customData unused.
   8505             return true;
   8506         }
   8507     }
   8508 
   8509     return false;
   8510 }
   8511 
   8512 bool VmaBlockMetadata_Linear::CreateAllocationRequest_UpperAddress(
   8513     VkDeviceSize allocSize,
   8514     VkDeviceSize allocAlignment,
   8515     VmaSuballocationType allocType,
   8516     uint32_t strategy,
   8517     VmaAllocationRequest* pAllocationRequest)
   8518 {
   8519     const VkDeviceSize blockSize = GetSize();
   8520     const VkDeviceSize bufferImageGranularity = GetBufferImageGranularity();
   8521     SuballocationVectorType& suballocations1st = AccessSuballocations1st();
   8522     SuballocationVectorType& suballocations2nd = AccessSuballocations2nd();
   8523 
   8524     if (m_2ndVectorMode == SECOND_VECTOR_RING_BUFFER)
   8525     {
   8526         VMA_ASSERT(0 && "Trying to use pool with linear algorithm as double stack, while it is already being used as ring buffer.");
   8527         return false;
   8528     }
   8529 
   8530     // Try to allocate before 2nd.back(), or end of block if 2nd.empty().
   8531     if (allocSize > blockSize)
   8532     {
   8533         return false;
   8534     }
   8535     VkDeviceSize resultBaseOffset = blockSize - allocSize;
   8536     if (!suballocations2nd.empty())
   8537     {
   8538         const VmaSuballocation& lastSuballoc = suballocations2nd.back();
   8539         resultBaseOffset = lastSuballoc.offset - allocSize;
   8540         if (allocSize > lastSuballoc.offset)
   8541         {
   8542             return false;
   8543         }
   8544     }
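             // Note: the subtraction above may wrap (VkDeviceSize is unsigned), but the
             // allocSize > lastSuballoc.offset check rejects that case before the value is used.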
   8545 
   8546     // Start from offset equal to end of free space.
   8547     VkDeviceSize resultOffset = resultBaseOffset;
   8548 
   8549     const VkDeviceSize debugMargin = GetDebugMargin();
   8550 
   8551     // Apply debugMargin at the end.
   8552     if (debugMargin > 0)
   8553     {
   8554         if (resultOffset < debugMargin)
   8555         {
   8556             return false;
   8557         }
   8558         resultOffset -= debugMargin;
   8559     }
   8560 
   8561     // Apply alignment.
   8562     resultOffset = VmaAlignDown(resultOffset, allocAlignment);
   8563 
   8564     // Check next suballocations from 2nd for BufferImageGranularity conflicts.
   8565     // Make bigger alignment if necessary.
   8566     if (bufferImageGranularity > 1 && bufferImageGranularity != allocAlignment && !suballocations2nd.empty())
   8567     {
   8568         bool bufferImageGranularityConflict = false;
   8569         for (size_t nextSuballocIndex = suballocations2nd.size(); nextSuballocIndex--; )
   8570         {
   8571             const VmaSuballocation& nextSuballoc = suballocations2nd[nextSuballocIndex];
   8572             if (VmaBlocksOnSamePage(resultOffset, allocSize, nextSuballoc.offset, bufferImageGranularity))
   8573             {
   8574                 if (VmaIsBufferImageGranularityConflict(nextSuballoc.type, allocType))
   8575                 {
   8576                     bufferImageGranularityConflict = true;
   8577                     break;
   8578                 }
   8579             }
   8580             else
   8581                 // Already on previous page.
   8582                 break;
   8583         }
   8584         if (bufferImageGranularityConflict)
   8585         {
   8586             resultOffset = VmaAlignDown(resultOffset, bufferImageGranularity);
   8587         }
   8588     }
   8589 
   8590     // There is enough free space.
   8591     const VkDeviceSize endOf1st = !suballocations1st.empty() ?
   8592         suballocations1st.back().offset + suballocations1st.back().size :
   8593         0;
   8594     if (endOf1st + debugMargin <= resultOffset)
   8595     {
   8596         // Check previous suballocations for BufferImageGranularity conflicts.
   8597         // If conflict exists, allocation cannot be made here.
   8598         if (bufferImageGranularity > 1)
   8599         {
   8600             for (size_t prevSuballocIndex = suballocations1st.size(); prevSuballocIndex--; )
   8601             {
   8602                 const VmaSuballocation& prevSuballoc = suballocations1st[prevSuballocIndex];
   8603                 if (VmaBlocksOnSamePage(prevSuballoc.offset, prevSuballoc.size, resultOffset, bufferImageGranularity))
   8604                 {
   8605                     if (VmaIsBufferImageGranularityConflict(allocType, prevSuballoc.type))
   8606                     {
   8607                         return false;
   8608                     }
   8609                 }
   8610                 else
   8611                 {
   8612                     // Already on next page.
   8613                     break;
   8614                 }
   8615             }
   8616         }
   8617 
   8618         // All tests passed: Success.
   8619         pAllocationRequest->allocHandle = (VmaAllocHandle)(resultOffset + 1);
   8620         // pAllocationRequest->item unused.
   8621         pAllocationRequest->type = VmaAllocationRequestType::UpperAddress;
   8622         return true;
   8623     }
   8624 
   8625     return false;
   8626 }
   8627 #endif // _VMA_BLOCK_METADATA_LINEAR_FUNCTIONS
   8628 #endif // _VMA_BLOCK_METADATA_LINEAR
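         // A minimal usage sketch (assuming an already created VmaAllocator "allocator" and a
         // suitable memTypeIndex, e.g. from vmaFindMemoryTypeIndexForBufferInfo()): the linear
         // algorithm above is selected for a custom pool via VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT:
         //
         //     VmaPoolCreateInfo poolCreateInfo = {};
         //     poolCreateInfo.memoryTypeIndex = memTypeIndex;
         //     poolCreateInfo.flags = VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT;
         //     poolCreateInfo.blockSize = 64ull * 1024 * 1024; // e.g. one fixed 64 MiB block
         //     poolCreateInfo.maxBlockCount = 1;
         //     VmaPool pool;
         //     VkResult res = vmaCreatePool(allocator, &poolCreateInfo, &pool);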
   8629 
   8630 #ifndef _VMA_BLOCK_METADATA_TLSF
    8631 // To avoid searching the current, larger region when the first allocation attempt fails,
    8632 // and skip straight to a smaller range, pass VMA_ALLOCATION_CREATE_STRATEGY_MIN_MEMORY_BIT
    8633 // as the strategy to CreateAllocationRequest(). When fragmentation and reuse of previous
    8634 // blocks don't matter, VMA_ALLOCATION_CREATE_STRATEGY_MIN_TIME_BIT gives the fastest possible alloc time.
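         // A minimal usage sketch (assuming a valid VmaAllocator and buffer parameters; a
         // sketch, not a definitive recipe): these strategies reach this allocator through
         // VmaAllocationCreateInfo::flags, e.g.:
         //
         //     VmaAllocationCreateInfo allocCreateInfo = {};
         //     allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
         //     allocCreateInfo.flags = VMA_ALLOCATION_CREATE_STRATEGY_MIN_TIME_BIT;
         //     // ...then pass allocCreateInfo to vmaCreateBuffer()/vmaCreateImage()/vmaAllocateMemory().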
   8635 class VmaBlockMetadata_TLSF : public VmaBlockMetadata
   8636 {
   8637     VMA_CLASS_NO_COPY_NO_MOVE(VmaBlockMetadata_TLSF)
   8638 public:
   8639     VmaBlockMetadata_TLSF(const VkAllocationCallbacks* pAllocationCallbacks,
   8640         VkDeviceSize bufferImageGranularity, bool isVirtual);
   8641     virtual ~VmaBlockMetadata_TLSF();
   8642 
   8643     size_t GetAllocationCount() const override { return m_AllocCount; }
   8644     size_t GetFreeRegionsCount() const override { return m_BlocksFreeCount + 1; }
   8645     VkDeviceSize GetSumFreeSize() const override { return m_BlocksFreeSize + m_NullBlock->size; }
    8646     bool IsEmpty() const override { return m_NullBlock->offset == 0; } // No physical blocks precede the null block.
   8647     VkDeviceSize GetAllocationOffset(VmaAllocHandle allocHandle) const override { return ((Block*)allocHandle)->offset; }
   8648 
   8649     void Init(VkDeviceSize size) override;
   8650     bool Validate() const override;
   8651 
   8652     void AddDetailedStatistics(VmaDetailedStatistics& inoutStats) const override;
   8653     void AddStatistics(VmaStatistics& inoutStats) const override;
   8654 
   8655 #if VMA_STATS_STRING_ENABLED
   8656     void PrintDetailedMap(class VmaJsonWriter& json) const override;
   8657 #endif
   8658 
   8659     bool CreateAllocationRequest(
   8660         VkDeviceSize allocSize,
   8661         VkDeviceSize allocAlignment,
   8662         bool upperAddress,
   8663         VmaSuballocationType allocType,
   8664         uint32_t strategy,
   8665         VmaAllocationRequest* pAllocationRequest) override;
   8666 
   8667     VkResult CheckCorruption(const void* pBlockData) override;
   8668     void Alloc(
   8669         const VmaAllocationRequest& request,
   8670         VmaSuballocationType type,
   8671         void* userData) override;
   8672 
   8673     void Free(VmaAllocHandle allocHandle) override;
   8674     void GetAllocationInfo(VmaAllocHandle allocHandle, VmaVirtualAllocationInfo& outInfo) override;
   8675     void* GetAllocationUserData(VmaAllocHandle allocHandle) const override;
   8676     VmaAllocHandle GetAllocationListBegin() const override;
   8677     VmaAllocHandle GetNextAllocation(VmaAllocHandle prevAlloc) const override;
   8678     VkDeviceSize GetNextFreeRegionSize(VmaAllocHandle alloc) const override;
   8679     void Clear() override;
   8680     void SetAllocationUserData(VmaAllocHandle allocHandle, void* userData) override;
   8681     void DebugLogAllAllocations() const override;
   8682 
   8683 private:
    8684     // According to the original paper, a value of 4 or 5 should be preferable:
   8685     // M. Masmano, I. Ripoll, A. Crespo, and J. Real "TLSF: a New Dynamic Memory Allocator for Real-Time Systems"
   8686     // http://www.gii.upv.es/tlsf/files/ecrts04_tlsf.pdf
   8687     static const uint8_t SECOND_LEVEL_INDEX = 5;
   8688     static const uint16_t SMALL_BUFFER_SIZE = 256;
   8689     static const uint32_t INITIAL_BLOCK_ALLOC_COUNT = 16;
   8690     static const uint8_t MEMORY_CLASS_SHIFT = 7;
   8691     static const uint8_t MAX_MEMORY_CLASSES = 65 - MEMORY_CLASS_SHIFT;
   8692 
   8693     class Block
   8694     {
   8695     public:
   8696         VkDeviceSize offset;
   8697         VkDeviceSize size;
   8698         Block* prevPhysical;
   8699         Block* nextPhysical;
   8700 
   8701         void MarkFree() { prevFree = VMA_NULL; }
   8702         void MarkTaken() { prevFree = this; }
   8703         bool IsFree() const { return prevFree != this; }
   8704         void*& UserData() { VMA_HEAVY_ASSERT(!IsFree()); return userData; }
   8705         Block*& PrevFree() { return prevFree; }
   8706         Block*& NextFree() { VMA_HEAVY_ASSERT(IsFree()); return nextFree; }
   8707 
   8708     private:
   8709         Block* prevFree; // Address of the same block here indicates that block is taken
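                 // The union below overlays nextFree and userData: a block is never both free
                 // and carrying user data, so one pointer slot serves both roles.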
   8710         union
   8711         {
   8712             Block* nextFree;
   8713             void* userData;
   8714         };
   8715     };
   8716 
   8717     size_t m_AllocCount;
   8718     // Total number of free blocks besides null block
   8719     size_t m_BlocksFreeCount;
   8720     // Total size of free blocks excluding null block
   8721     VkDeviceSize m_BlocksFreeSize;
   8722     uint32_t m_IsFreeBitmap;
   8723     uint8_t m_MemoryClasses;
   8724     uint32_t m_InnerIsFreeBitmap[MAX_MEMORY_CLASSES];
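             // Classic TLSF two-level lookup: one bit per memory class in m_IsFreeBitmap and
             // one bit per second-level list in m_InnerIsFreeBitmap allow finding a suitable
             // non-empty free list in O(1) with bit scans.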
   8725     uint32_t m_ListsCount;
    8726     /*
    8727     * Memory class 0: lists for small buffers (4 of them, or 2^SECOND_LEVEL_INDEX when virtual).
    8728     * Memory classes 1+: 2^SECOND_LEVEL_INDEX lists each, for normal buffers.
    8729     */
   8730     Block** m_FreeList;
   8731     VmaPoolAllocator<Block> m_BlockAllocator;
   8732     Block* m_NullBlock;
   8733     VmaBlockBufferImageGranularity m_GranularityHandler;
   8734 
   8735     uint8_t SizeToMemoryClass(VkDeviceSize size) const;
   8736     uint16_t SizeToSecondIndex(VkDeviceSize size, uint8_t memoryClass) const;
   8737     uint32_t GetListIndex(uint8_t memoryClass, uint16_t secondIndex) const;
   8738     uint32_t GetListIndex(VkDeviceSize size) const;
   8739 
   8740     void RemoveFreeBlock(Block* block);
   8741     void InsertFreeBlock(Block* block);
   8742     void MergeBlock(Block* block, Block* prev);
   8743 
   8744     Block* FindFreeBlock(VkDeviceSize size, uint32_t& listIndex) const;
   8745     bool CheckBlock(
   8746         Block& block,
   8747         uint32_t listIndex,
   8748         VkDeviceSize allocSize,
   8749         VkDeviceSize allocAlignment,
   8750         VmaSuballocationType allocType,
   8751         VmaAllocationRequest* pAllocationRequest);
   8752 };
   8753 
   8754 #ifndef _VMA_BLOCK_METADATA_TLSF_FUNCTIONS
   8755 VmaBlockMetadata_TLSF::VmaBlockMetadata_TLSF(const VkAllocationCallbacks* pAllocationCallbacks,
   8756     VkDeviceSize bufferImageGranularity, bool isVirtual)
   8757     : VmaBlockMetadata(pAllocationCallbacks, bufferImageGranularity, isVirtual),
   8758     m_AllocCount(0),
   8759     m_BlocksFreeCount(0),
   8760     m_BlocksFreeSize(0),
   8761     m_IsFreeBitmap(0),
   8762     m_MemoryClasses(0),
   8763     m_ListsCount(0),
   8764     m_FreeList(VMA_NULL),
   8765     m_BlockAllocator(pAllocationCallbacks, INITIAL_BLOCK_ALLOC_COUNT),
   8766     m_NullBlock(VMA_NULL),
   8767     m_GranularityHandler(bufferImageGranularity) {}
   8768 
   8769 VmaBlockMetadata_TLSF::~VmaBlockMetadata_TLSF()
   8770 {
   8771     if (m_FreeList)
   8772         vma_delete_array(GetAllocationCallbacks(), m_FreeList, m_ListsCount);
   8773     m_GranularityHandler.Destroy(GetAllocationCallbacks());
   8774 }
   8775 
   8776 void VmaBlockMetadata_TLSF::Init(VkDeviceSize size)
   8777 {
   8778     VmaBlockMetadata::Init(size);
   8779 
   8780     if (!IsVirtual())
   8781         m_GranularityHandler.Init(GetAllocationCallbacks(), size);
   8782 
   8783     m_NullBlock = m_BlockAllocator.Alloc();
   8784     m_NullBlock->size = size;
   8785     m_NullBlock->offset = 0;
   8786     m_NullBlock->prevPhysical = VMA_NULL;
   8787     m_NullBlock->nextPhysical = VMA_NULL;
   8788     m_NullBlock->MarkFree();
   8789     m_NullBlock->NextFree() = VMA_NULL;
   8790     m_NullBlock->PrevFree() = VMA_NULL;
   8791     uint8_t memoryClass = SizeToMemoryClass(size);
   8792     uint16_t sli = SizeToSecondIndex(size, memoryClass);
   8793     m_ListsCount = (memoryClass == 0 ? 0 : (memoryClass - 1) * (1UL << SECOND_LEVEL_INDEX) + sli) + 1;
   8794     if (IsVirtual())
   8795         m_ListsCount += 1UL << SECOND_LEVEL_INDEX;
   8796     else
   8797         m_ListsCount += 4;
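             // The lists added above serve memory class 0 (small buffers): 2^SECOND_LEVEL_INDEX
             // fine-grained lists for virtual blocks, or 4 coarser ones for non-virtual blocks.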
   8798 
   8799     m_MemoryClasses = memoryClass + uint8_t(2);
   8800     memset(m_InnerIsFreeBitmap, 0, MAX_MEMORY_CLASSES * sizeof(uint32_t));
   8801 
   8802     m_FreeList = vma_new_array(GetAllocationCallbacks(), Block*, m_ListsCount);
   8803     memset(m_FreeList, 0, m_ListsCount * sizeof(Block*));
   8804 }
   8805 
   8806 bool VmaBlockMetadata_TLSF::Validate() const
   8807 {
   8808     VMA_VALIDATE(GetSumFreeSize() <= GetSize());
   8809 
   8810     VkDeviceSize calculatedSize = m_NullBlock->size;
   8811     VkDeviceSize calculatedFreeSize = m_NullBlock->size;
   8812     size_t allocCount = 0;
   8813     size_t freeCount = 0;
   8814 
   8815     // Check integrity of free lists
   8816     for (uint32_t list = 0; list < m_ListsCount; ++list)
   8817     {
   8818         Block* block = m_FreeList[list];
   8819         if (block != VMA_NULL)
   8820         {
   8821             VMA_VALIDATE(block->IsFree());
   8822             VMA_VALIDATE(block->PrevFree() == VMA_NULL);
   8823             while (block->NextFree())
   8824             {
   8825                 VMA_VALIDATE(block->NextFree()->IsFree());
   8826                 VMA_VALIDATE(block->NextFree()->PrevFree() == block);
   8827                 block = block->NextFree();
   8828             }
   8829         }
   8830     }
   8831 
   8832     VkDeviceSize nextOffset = m_NullBlock->offset;
   8833     auto validateCtx = m_GranularityHandler.StartValidation(GetAllocationCallbacks(), IsVirtual());
   8834 
   8835     VMA_VALIDATE(m_NullBlock->nextPhysical == VMA_NULL);
   8836     if (m_NullBlock->prevPhysical)
   8837     {
   8838         VMA_VALIDATE(m_NullBlock->prevPhysical->nextPhysical == m_NullBlock);
   8839     }
   8840     // Check all blocks
   8841     for (Block* prev = m_NullBlock->prevPhysical; prev != VMA_NULL; prev = prev->prevPhysical)
   8842     {
   8843         VMA_VALIDATE(prev->offset + prev->size == nextOffset);
   8844         nextOffset = prev->offset;
   8845         calculatedSize += prev->size;
   8846 
   8847         uint32_t listIndex = GetListIndex(prev->size);
   8848         if (prev->IsFree())
   8849         {
   8850             ++freeCount;
   8851             // Check that the free block is present in its free list
   8852             Block* freeBlock = m_FreeList[listIndex];
   8853             VMA_VALIDATE(freeBlock != VMA_NULL);
   8854 
   8855             bool found = false;
   8856             do
   8857             {
   8858                 if (freeBlock == prev)
   8859                     found = true;
   8860 
   8861                 freeBlock = freeBlock->NextFree();
   8862             } while (!found && freeBlock != VMA_NULL);
   8863 
   8864             VMA_VALIDATE(found);
   8865             calculatedFreeSize += prev->size;
   8866         }
   8867         else
   8868         {
   8869             ++allocCount;
   8870             // Check that the taken block is not on any free list
   8871             Block* freeBlock = m_FreeList[listIndex];
   8872             while (freeBlock)
   8873             {
   8874                 VMA_VALIDATE(freeBlock != prev);
   8875                 freeBlock = freeBlock->NextFree();
   8876             }
   8877 
   8878             if (!IsVirtual())
   8879             {
   8880                 VMA_VALIDATE(m_GranularityHandler.Validate(validateCtx, prev->offset, prev->size));
   8881             }
   8882         }
   8883 
   8884         if (prev->prevPhysical)
   8885         {
   8886             VMA_VALIDATE(prev->prevPhysical->nextPhysical == prev);
   8887         }
   8888     }
   8889 
   8890     if (!IsVirtual())
   8891     {
   8892         VMA_VALIDATE(m_GranularityHandler.FinishValidation(validateCtx));
   8893     }
   8894 
   8895     VMA_VALIDATE(nextOffset == 0);
   8896     VMA_VALIDATE(calculatedSize == GetSize());
   8897     VMA_VALIDATE(calculatedFreeSize == GetSumFreeSize());
   8898     VMA_VALIDATE(allocCount == m_AllocCount);
   8899     VMA_VALIDATE(freeCount == m_BlocksFreeCount);
   8900 
   8901     return true;
   8902 }
   8903 
   8904 void VmaBlockMetadata_TLSF::AddDetailedStatistics(VmaDetailedStatistics& inoutStats) const
   8905 {
   8906     inoutStats.statistics.blockCount++;
   8907     inoutStats.statistics.blockBytes += GetSize();
   8908     if (m_NullBlock->size > 0)
   8909         VmaAddDetailedStatisticsUnusedRange(inoutStats, m_NullBlock->size);
   8910 
   8911     for (Block* block = m_NullBlock->prevPhysical; block != VMA_NULL; block = block->prevPhysical)
   8912     {
   8913         if (block->IsFree())
   8914             VmaAddDetailedStatisticsUnusedRange(inoutStats, block->size);
   8915         else
   8916             VmaAddDetailedStatisticsAllocation(inoutStats, block->size);
   8917     }
   8918 }
   8919 
   8920 void VmaBlockMetadata_TLSF::AddStatistics(VmaStatistics& inoutStats) const
   8921 {
   8922     inoutStats.blockCount++;
   8923     inoutStats.allocationCount += (uint32_t)m_AllocCount;
   8924     inoutStats.blockBytes += GetSize();
   8925     inoutStats.allocationBytes += GetSize() - GetSumFreeSize();
   8926 }
   8927 
   8928 #if VMA_STATS_STRING_ENABLED
   8929 void VmaBlockMetadata_TLSF::PrintDetailedMap(class VmaJsonWriter& json) const
   8930 {
   8931     size_t blockCount = m_AllocCount + m_BlocksFreeCount;
   8932     VmaStlAllocator<Block*> allocator(GetAllocationCallbacks());
   8933     VmaVector<Block*, VmaStlAllocator<Block*>> blockList(blockCount, allocator);
   8934 
   8935     size_t i = blockCount;
   8936     for (Block* block = m_NullBlock->prevPhysical; block != VMA_NULL; block = block->prevPhysical)
   8937     {
   8938         blockList[--i] = block;
   8939     }
   8940     VMA_ASSERT(i == 0);
   8941 
   8942     VmaDetailedStatistics stats;
   8943     VmaClearDetailedStatistics(stats);
   8944     AddDetailedStatistics(stats);
   8945 
   8946     PrintDetailedMap_Begin(json,
   8947         stats.statistics.blockBytes - stats.statistics.allocationBytes,
   8948         stats.statistics.allocationCount,
   8949         stats.unusedRangeCount);
   8950 
   8951     for (; i < blockCount; ++i)
   8952     {
   8953         Block* block = blockList[i];
   8954         if (block->IsFree())
   8955             PrintDetailedMap_UnusedRange(json, block->offset, block->size);
   8956         else
   8957             PrintDetailedMap_Allocation(json, block->offset, block->size, block->UserData());
   8958     }
   8959     if (m_NullBlock->size > 0)
   8960         PrintDetailedMap_UnusedRange(json, m_NullBlock->offset, m_NullBlock->size);
   8961 
   8962     PrintDetailedMap_End(json);
   8963 }
   8964 #endif
   8965 
   8966 bool VmaBlockMetadata_TLSF::CreateAllocationRequest(
   8967     VkDeviceSize allocSize,
   8968     VkDeviceSize allocAlignment,
   8969     bool upperAddress,
   8970     VmaSuballocationType allocType,
   8971     uint32_t strategy,
   8972     VmaAllocationRequest* pAllocationRequest)
   8973 {
   8974     VMA_ASSERT(allocSize > 0 && "Cannot allocate empty block!");
   8975     VMA_ASSERT(!upperAddress && "VMA_ALLOCATION_CREATE_UPPER_ADDRESS_BIT can be used only with linear algorithm.");
   8976 
   8977     // For small granularity, round up the request
   8978     if (!IsVirtual())
   8979         m_GranularityHandler.RoundupAllocRequest(allocType, allocSize, allocAlignment);
   8980 
   8981     allocSize += GetDebugMargin();
   8982     // Quick check for too small pool
   8983     if (allocSize > GetSumFreeSize())
   8984         return false;
   8985 
   8986     // If no free blocks in pool then check only null block
   8987     if (m_BlocksFreeCount == 0)
   8988         return CheckBlock(*m_NullBlock, m_ListsCount, allocSize, allocAlignment, allocType, pAllocationRequest);
   8989 
   8990     // Round up to the next block
   8991     VkDeviceSize sizeForNextList = allocSize;
   8992     VkDeviceSize smallSizeStep = VkDeviceSize(SMALL_BUFFER_SIZE / (IsVirtual() ? 1 << SECOND_LEVEL_INDEX : 4));
   8993     if (allocSize > SMALL_BUFFER_SIZE)
   8994     {
   8995         sizeForNextList += (1ULL << (VMA_BITSCAN_MSB(allocSize) - SECOND_LEVEL_INDEX));
   8996     }
   8997     else if (allocSize > SMALL_BUFFER_SIZE - smallSizeStep)
   8998         sizeForNextList = SMALL_BUFFER_SIZE + 1;
   8999     else
   9000         sizeForNextList += smallSizeStep;
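            // Note on this rounding (an explanatory sketch, not upstream commentary): bumping the
            // size up to the next free-list bucket turns the lookup into a "good fit" search - any
            // block found in that bucket is guaranteed to be large enough, so no bucket has to be
            // walked comparing sizes. E.g. with SECOND_LEVEL_INDEX == 5, as defined for this
            // metadata earlier in the file, a 10000-byte request has its MSB at bit 13 and is
            // therefore bumped by 2^(13-5) = 256 bytes before the bucket is chosen.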
   9001 
   9002     uint32_t nextListIndex = m_ListsCount;
   9003     uint32_t prevListIndex = m_ListsCount;
   9004     Block* nextListBlock = VMA_NULL;
   9005     Block* prevListBlock = VMA_NULL;
   9006 
   9007     // Check blocks according to strategies
   9008     if (strategy & VMA_ALLOCATION_CREATE_STRATEGY_MIN_TIME_BIT)
   9009     {
   9010         // Quick check for larger block first
   9011         nextListBlock = FindFreeBlock(sizeForNextList, nextListIndex);
   9012         if (nextListBlock != VMA_NULL && CheckBlock(*nextListBlock, nextListIndex, allocSize, allocAlignment, allocType, pAllocationRequest))
   9013             return true;
   9014 
   9015         // If it did not fit, try the null block
   9016         if (CheckBlock(*m_NullBlock, m_ListsCount, allocSize, allocAlignment, allocType, pAllocationRequest))
   9017             return true;
   9018 
   9019         // Null block failed, search larger bucket
   9020         while (nextListBlock)
   9021         {
   9022             if (CheckBlock(*nextListBlock, nextListIndex, allocSize, allocAlignment, allocType, pAllocationRequest))
   9023                 return true;
   9024             nextListBlock = nextListBlock->NextFree();
   9025         }
   9026 
   9027         // Failed again, check best fit bucket
   9028         prevListBlock = FindFreeBlock(allocSize, prevListIndex);
   9029         while (prevListBlock)
   9030         {
   9031             if (CheckBlock(*prevListBlock, prevListIndex, allocSize, allocAlignment, allocType, pAllocationRequest))
   9032                 return true;
   9033             prevListBlock = prevListBlock->NextFree();
   9034         }
   9035     }
   9036     else if (strategy & VMA_ALLOCATION_CREATE_STRATEGY_MIN_MEMORY_BIT)
   9037     {
   9038         // Check best fit bucket
   9039         prevListBlock = FindFreeBlock(allocSize, prevListIndex);
   9040         while (prevListBlock)
   9041         {
   9042             if (CheckBlock(*prevListBlock, prevListIndex, allocSize, allocAlignment, allocType, pAllocationRequest))
   9043                 return true;
   9044             prevListBlock = prevListBlock->NextFree();
   9045         }
   9046 
   9047         // If failed check null block
   9048         if (CheckBlock(*m_NullBlock, m_ListsCount, allocSize, allocAlignment, allocType, pAllocationRequest))
   9049             return true;
   9050 
   9051         // Check larger bucket
   9052         nextListBlock = FindFreeBlock(sizeForNextList, nextListIndex);
   9053         while (nextListBlock)
   9054         {
   9055             if (CheckBlock(*nextListBlock, nextListIndex, allocSize, allocAlignment, allocType, pAllocationRequest))
   9056                 return true;
   9057             nextListBlock = nextListBlock->NextFree();
   9058         }
   9059     }
   9060     else if (strategy & VMA_ALLOCATION_CREATE_STRATEGY_MIN_OFFSET_BIT)
   9061     {
   9062         // Perform search from the start
   9063         VmaStlAllocator<Block*> allocator(GetAllocationCallbacks());
   9064         VmaVector<Block*, VmaStlAllocator<Block*>> blockList(m_BlocksFreeCount, allocator);
   9065 
   9066         size_t i = m_BlocksFreeCount;
   9067         for (Block* block = m_NullBlock->prevPhysical; block != VMA_NULL; block = block->prevPhysical)
   9068         {
   9069             if (block->IsFree() && block->size >= allocSize)
   9070                 blockList[--i] = block;
   9071         }
   9072 
   9073         for (; i < m_BlocksFreeCount; ++i)
   9074         {
   9075             Block& block = *blockList[i];
   9076             if (CheckBlock(block, GetListIndex(block.size), allocSize, allocAlignment, allocType, pAllocationRequest))
   9077                 return true;
   9078         }
   9079 
   9080         // If failed check null block
   9081         if (CheckBlock(*m_NullBlock, m_ListsCount, allocSize, allocAlignment, allocType, pAllocationRequest))
   9082             return true;
   9083 
   9084         // Whole range searched, no more memory
   9085         return false;
   9086     }
   9087     else
   9088     {
   9089         // Check larger bucket
   9090         nextListBlock = FindFreeBlock(sizeForNextList, nextListIndex);
   9091         while (nextListBlock)
   9092         {
   9093             if (CheckBlock(*nextListBlock, nextListIndex, allocSize, allocAlignment, allocType, pAllocationRequest))
   9094                 return true;
   9095             nextListBlock = nextListBlock->NextFree();
   9096         }
   9097 
   9098         // If failed check null block
   9099         if (CheckBlock(*m_NullBlock, m_ListsCount, allocSize, allocAlignment, allocType, pAllocationRequest))
   9100             return true;
   9101 
   9102         // Check best fit bucket
   9103         prevListBlock = FindFreeBlock(allocSize, prevListIndex);
   9104         while (prevListBlock)
   9105         {
   9106             if (CheckBlock(*prevListBlock, prevListIndex, allocSize, allocAlignment, allocType, pAllocationRequest))
   9107                 return true;
   9108             prevListBlock = prevListBlock->NextFree();
   9109         }
   9110     }
   9111 
   9112     // Worst case, full search has to be done
   9113     while (++nextListIndex < m_ListsCount)
   9114     {
   9115         nextListBlock = m_FreeList[nextListIndex];
   9116         while (nextListBlock)
   9117         {
   9118             if (CheckBlock(*nextListBlock, nextListIndex, allocSize, allocAlignment, allocType, pAllocationRequest))
   9119                 return true;
   9120             nextListBlock = nextListBlock->NextFree();
   9121         }
   9122     }
   9123 
   9124     // No more memory sadly
   9125     return false;
   9126 }
   9127 
   9128 VkResult VmaBlockMetadata_TLSF::CheckCorruption(const void* pBlockData)
   9129 {
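            // Explanatory note: when corruption detection is enabled, each allocation is followed
            // by a debug margin filled with a magic value; the loop below re-reads that margin to
            // catch writes past the end of an allocation.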
   9130     for (Block* block = m_NullBlock->prevPhysical; block != VMA_NULL; block = block->prevPhysical)
   9131     {
   9132         if (!block->IsFree())
   9133         {
   9134             if (!VmaValidateMagicValue(pBlockData, block->offset + block->size))
   9135             {
   9136                 VMA_ASSERT(0 && "MEMORY CORRUPTION DETECTED AFTER VALIDATED ALLOCATION!");
   9137                 return VK_ERROR_UNKNOWN_COPY;
   9138             }
   9139         }
   9140     }
   9141 
   9142     return VK_SUCCESS;
   9143 }
   9144 
   9145 void VmaBlockMetadata_TLSF::Alloc(
   9146     const VmaAllocationRequest& request,
   9147     VmaSuballocationType type,
   9148     void* userData)
   9149 {
   9150     VMA_ASSERT(request.type == VmaAllocationRequestType::TLSF);
   9151 
   9152     // Get block and pop it from the free list
   9153     Block* currentBlock = (Block*)request.allocHandle;
   9154     VkDeviceSize offset = request.algorithmData;
   9155     VMA_ASSERT(currentBlock != VMA_NULL);
   9156     VMA_ASSERT(currentBlock->offset <= offset);
   9157 
   9158     if (currentBlock != m_NullBlock)
   9159         RemoveFreeBlock(currentBlock);
   9160 
   9161     VkDeviceSize debugMargin = GetDebugMargin();
   9162     VkDeviceSize missingAlignment = offset - currentBlock->offset;
   9163 
   9164     // Append the missing alignment to the previous block or create a new one
   9165     if (missingAlignment)
   9166     {
   9167         Block* prevBlock = currentBlock->prevPhysical;
   9168         VMA_ASSERT(prevBlock != VMA_NULL && "There should be no missing alignment at offset 0!");
   9169 
   9170         if (prevBlock->IsFree() && prevBlock->size != debugMargin)
   9171         {
   9172             uint32_t oldList = GetListIndex(prevBlock->size);
   9173             prevBlock->size += missingAlignment;
   9174             // Check if the new size crosses a list bucket boundary
   9175             if (oldList != GetListIndex(prevBlock->size))
   9176             {
   9177                 prevBlock->size -= missingAlignment;
   9178                 RemoveFreeBlock(prevBlock);
   9179                 prevBlock->size += missingAlignment;
   9180                 InsertFreeBlock(prevBlock);
   9181             }
   9182             else
   9183                 m_BlocksFreeSize += missingAlignment;
   9184         }
   9185         else
   9186         {
   9187             Block* newBlock = m_BlockAllocator.Alloc();
   9188             currentBlock->prevPhysical = newBlock;
   9189             prevBlock->nextPhysical = newBlock;
   9190             newBlock->prevPhysical = prevBlock;
   9191             newBlock->nextPhysical = currentBlock;
   9192             newBlock->size = missingAlignment;
   9193             newBlock->offset = currentBlock->offset;
   9194             newBlock->MarkTaken();
   9195 
   9196             InsertFreeBlock(newBlock);
   9197         }
   9198 
   9199         currentBlock->size -= missingAlignment;
   9200         currentBlock->offset += missingAlignment;
   9201     }
   9202 
   9203     VkDeviceSize size = request.size + debugMargin;
   9204     if (currentBlock->size == size)
   9205     {
   9206         if (currentBlock == m_NullBlock)
   9207         {
   9208             // Set up a new null block
   9209             m_NullBlock = m_BlockAllocator.Alloc();
   9210             m_NullBlock->size = 0;
   9211             m_NullBlock->offset = currentBlock->offset + size;
   9212             m_NullBlock->prevPhysical = currentBlock;
   9213             m_NullBlock->nextPhysical = VMA_NULL;
   9214             m_NullBlock->MarkFree();
   9215             m_NullBlock->PrevFree() = VMA_NULL;
   9216             m_NullBlock->NextFree() = VMA_NULL;
   9217             currentBlock->nextPhysical = m_NullBlock;
   9218             currentBlock->MarkTaken();
   9219         }
   9220     }
   9221     else
   9222     {
   9223         VMA_ASSERT(currentBlock->size > size && "Proper block already found, shouldn't find smaller one!");
   9224 
   9225         // Create new free block
   9226         Block* newBlock = m_BlockAllocator.Alloc();
   9227         newBlock->size = currentBlock->size - size;
   9228         newBlock->offset = currentBlock->offset + size;
   9229         newBlock->prevPhysical = currentBlock;
   9230         newBlock->nextPhysical = currentBlock->nextPhysical;
   9231         currentBlock->nextPhysical = newBlock;
   9232         currentBlock->size = size;
   9233 
   9234         if (currentBlock == m_NullBlock)
   9235         {
   9236             m_NullBlock = newBlock;
   9237             m_NullBlock->MarkFree();
   9238             m_NullBlock->NextFree() = VMA_NULL;
   9239             m_NullBlock->PrevFree() = VMA_NULL;
   9240             currentBlock->MarkTaken();
   9241         }
   9242         else
   9243         {
   9244             newBlock->nextPhysical->prevPhysical = newBlock;
   9245             newBlock->MarkTaken();
   9246             InsertFreeBlock(newBlock);
   9247         }
   9248     }
   9249     currentBlock->UserData() = userData;
   9250 
   9251     if (debugMargin > 0)
   9252     {
   9253         currentBlock->size -= debugMargin;
   9254         Block* newBlock = m_BlockAllocator.Alloc();
   9255         newBlock->size = debugMargin;
   9256         newBlock->offset = currentBlock->offset + currentBlock->size;
   9257         newBlock->prevPhysical = currentBlock;
   9258         newBlock->nextPhysical = currentBlock->nextPhysical;
   9259         newBlock->MarkTaken();
   9260         currentBlock->nextPhysical->prevPhysical = newBlock;
   9261         currentBlock->nextPhysical = newBlock;
   9262         InsertFreeBlock(newBlock);
   9263     }
   9264 
   9265     if (!IsVirtual())
   9266         m_GranularityHandler.AllocPages((uint8_t)(uintptr_t)request.customData,
   9267             currentBlock->offset, currentBlock->size);
   9268     ++m_AllocCount;
   9269 }
   9270 
   9271 void VmaBlockMetadata_TLSF::Free(VmaAllocHandle allocHandle)
   9272 {
   9273     Block* block = (Block*)allocHandle;
   9274     Block* next = block->nextPhysical;
   9275     VMA_ASSERT(!block->IsFree() && "Block is already free!");
   9276 
   9277     if (!IsVirtual())
   9278         m_GranularityHandler.FreePages(block->offset, block->size);
   9279     --m_AllocCount;
   9280 
   9281     VkDeviceSize debugMargin = GetDebugMargin();
   9282     if (debugMargin > 0)
   9283     {
   9284         RemoveFreeBlock(next);
   9285         MergeBlock(next, block);
   9286         block = next;
   9287         next = next->nextPhysical;
   9288     }
   9289 
   9290     // Try merging
   9291     Block* prev = block->prevPhysical;
   9292     if (prev != VMA_NULL && prev->IsFree() && prev->size != debugMargin)
   9293     {
   9294         RemoveFreeBlock(prev);
   9295         MergeBlock(block, prev);
   9296     }
   9297 
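            // Explanatory note on the dispatch below: if the following physical block is taken,
            // the merged block simply joins a free list; if it is the null block, the freed space
            // is folded into it; otherwise the two free neighbors are merged and reinserted,
            // because the combined size may belong to a different bucket.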
   9298     if (!next->IsFree())
   9299         InsertFreeBlock(block);
   9300     else if (next == m_NullBlock)
   9301         MergeBlock(m_NullBlock, block);
   9302     else
   9303     {
   9304         RemoveFreeBlock(next);
   9305         MergeBlock(next, block);
   9306         InsertFreeBlock(next);
   9307     }
   9308 }
   9309 
   9310 void VmaBlockMetadata_TLSF::GetAllocationInfo(VmaAllocHandle allocHandle, VmaVirtualAllocationInfo& outInfo)
   9311 {
   9312     Block* block = (Block*)allocHandle;
   9313     VMA_ASSERT(!block->IsFree() && "Cannot get allocation info for free block!");
   9314     outInfo.offset = block->offset;
   9315     outInfo.size = block->size;
   9316     outInfo.pUserData = block->UserData();
   9317 }
   9318 
   9319 void* VmaBlockMetadata_TLSF::GetAllocationUserData(VmaAllocHandle allocHandle) const
   9320 {
   9321     Block* block = (Block*)allocHandle;
   9322     VMA_ASSERT(!block->IsFree() && "Cannot get user data for free block!");
   9323     return block->UserData();
   9324 }
   9325 
   9326 VmaAllocHandle VmaBlockMetadata_TLSF::GetAllocationListBegin() const
   9327 {
   9328     if (m_AllocCount == 0)
   9329         return VK_NULL_HANDLE;
   9330 
   9331     for (Block* block = m_NullBlock->prevPhysical; block; block = block->prevPhysical)
   9332     {
   9333         if (!block->IsFree())
   9334             return (VmaAllocHandle)block;
   9335     }
   9336     VMA_ASSERT(false && "If m_AllocCount > 0 then there should be at least one allocation!");
   9337     return VK_NULL_HANDLE;
   9338 }
   9339 
   9340 VmaAllocHandle VmaBlockMetadata_TLSF::GetNextAllocation(VmaAllocHandle prevAlloc) const
   9341 {
   9342     Block* startBlock = (Block*)prevAlloc;
   9343     VMA_ASSERT(!startBlock->IsFree() && "Incorrect block!");
   9344 
   9345     for (Block* block = startBlock->prevPhysical; block; block = block->prevPhysical)
   9346     {
   9347         if (!block->IsFree())
   9348             return (VmaAllocHandle)block;
   9349     }
   9350     return VK_NULL_HANDLE;
   9351 }
   9352 
   9353 VkDeviceSize VmaBlockMetadata_TLSF::GetNextFreeRegionSize(VmaAllocHandle alloc) const
   9354 {
   9355     Block* block = (Block*)alloc;
   9356     VMA_ASSERT(!block->IsFree() && "Incorrect block!");
   9357 
   9358     if (block->prevPhysical)
   9359         return block->prevPhysical->IsFree() ? block->prevPhysical->size : 0;
   9360     return 0;
   9361 }
   9362 
   9363 void VmaBlockMetadata_TLSF::Clear()
   9364 {
   9365     m_AllocCount = 0;
   9366     m_BlocksFreeCount = 0;
   9367     m_BlocksFreeSize = 0;
   9368     m_IsFreeBitmap = 0;
   9369     m_NullBlock->offset = 0;
   9370     m_NullBlock->size = GetSize();
   9371     Block* block = m_NullBlock->prevPhysical;
   9372     m_NullBlock->prevPhysical = VMA_NULL;
   9373     while (block)
   9374     {
   9375         Block* prev = block->prevPhysical;
   9376         m_BlockAllocator.Free(block);
   9377         block = prev;
   9378     }
   9379     memset(m_FreeList, 0, m_ListsCount * sizeof(Block*));
   9380     memset(m_InnerIsFreeBitmap, 0, m_MemoryClasses * sizeof(uint32_t));
   9381     m_GranularityHandler.Clear();
   9382 }
   9383 
   9384 void VmaBlockMetadata_TLSF::SetAllocationUserData(VmaAllocHandle allocHandle, void* userData)
   9385 {
   9386     Block* block = (Block*)allocHandle;
   9387     VMA_ASSERT(!block->IsFree() && "Trying to set user data for a block that is not allocated!");
   9388     block->UserData() = userData;
   9389 }
   9390 
   9391 void VmaBlockMetadata_TLSF::DebugLogAllAllocations() const
   9392 {
   9393     for (Block* block = m_NullBlock->prevPhysical; block != VMA_NULL; block = block->prevPhysical)
   9394         if (!block->IsFree())
   9395             DebugLogAllocation(block->offset, block->size, block->UserData());
   9396 }
   9397 
   9398 uint8_t VmaBlockMetadata_TLSF::SizeToMemoryClass(VkDeviceSize size) const
   9399 {
   9400     if (size > SMALL_BUFFER_SIZE)
   9401         return uint8_t(VMA_BITSCAN_MSB(size) - MEMORY_CLASS_SHIFT);
   9402     return 0;
   9403 }
   9404 
   9405 uint16_t VmaBlockMetadata_TLSF::SizeToSecondIndex(VkDeviceSize size, uint8_t memoryClass) const
   9406 {
   9407     if (memoryClass == 0)
   9408     {
   9409         if (IsVirtual())
   9410             return static_cast<uint16_t>((size - 1) / 8);
   9411         else
   9412             return static_cast<uint16_t>((size - 1) / 64);
   9413     }
   9414     return static_cast<uint16_t>((size >> (memoryClass + MEMORY_CLASS_SHIFT - SECOND_LEVEL_INDEX)) ^ (1U << SECOND_LEVEL_INDEX));
   9415 }
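        // Worked example (illustrative, assuming SECOND_LEVEL_INDEX == 5 and MEMORY_CLASS_SHIFT == 7
        // as defined for this metadata earlier in the file): size = 1000 has its MSB at bit 9,
        // giving memory class 9 - 7 = 2; the second index is then (1000 >> (2 + 7 - 5)) ^ (1 << 5)
        // = 62 ^ 32 = 30, i.e. the 5 bits just below the MSB select one of 32 sub-buckets per class.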
   9416 
   9417 uint32_t VmaBlockMetadata_TLSF::GetListIndex(uint8_t memoryClass, uint16_t secondIndex) const
   9418 {
   9419     if (memoryClass == 0)
   9420         return secondIndex;
   9421 
   9422     const uint32_t index = static_cast<uint32_t>(memoryClass - 1) * (1 << SECOND_LEVEL_INDEX) + secondIndex;
   9423     if (IsVirtual())
   9424         return index + (1 << SECOND_LEVEL_INDEX);
   9425     else
   9426         return index + 4;
   9427 }
   9428 
   9429 uint32_t VmaBlockMetadata_TLSF::GetListIndex(VkDeviceSize size) const
   9430 {
   9431     uint8_t memoryClass = SizeToMemoryClass(size);
   9432     return GetListIndex(memoryClass, SizeToSecondIndex(size, memoryClass));
   9433 }
   9434 
   9435 void VmaBlockMetadata_TLSF::RemoveFreeBlock(Block* block)
   9436 {
   9437     VMA_ASSERT(block != m_NullBlock);
   9438     VMA_ASSERT(block->IsFree());
   9439 
   9440     if (block->NextFree() != VMA_NULL)
   9441         block->NextFree()->PrevFree() = block->PrevFree();
   9442     if (block->PrevFree() != VMA_NULL)
   9443         block->PrevFree()->NextFree() = block->NextFree();
   9444     else
   9445     {
   9446         uint8_t memClass = SizeToMemoryClass(block->size);
   9447         uint16_t secondIndex = SizeToSecondIndex(block->size, memClass);
   9448         uint32_t index = GetListIndex(memClass, secondIndex);
   9449         VMA_ASSERT(m_FreeList[index] == block);
   9450         m_FreeList[index] = block->NextFree();
   9451         if (block->NextFree() == VMA_NULL)
   9452         {
   9453             m_InnerIsFreeBitmap[memClass] &= ~(1U << secondIndex);
   9454             if (m_InnerIsFreeBitmap[memClass] == 0)
   9455                 m_IsFreeBitmap &= ~(1UL << memClass);
   9456         }
   9457     }
   9458     block->MarkTaken();
   9459     block->UserData() = VMA_NULL;
   9460     --m_BlocksFreeCount;
   9461     m_BlocksFreeSize -= block->size;
   9462 }
   9463 
   9464 void VmaBlockMetadata_TLSF::InsertFreeBlock(Block* block)
   9465 {
   9466     VMA_ASSERT(block != m_NullBlock);
   9467     VMA_ASSERT(!block->IsFree() && "Cannot insert block twice!");
   9468 
   9469     uint8_t memClass = SizeToMemoryClass(block->size);
   9470     uint16_t secondIndex = SizeToSecondIndex(block->size, memClass);
   9471     uint32_t index = GetListIndex(memClass, secondIndex);
   9472     VMA_ASSERT(index < m_ListsCount);
   9473     block->PrevFree() = VMA_NULL;
   9474     block->NextFree() = m_FreeList[index];
   9475     m_FreeList[index] = block;
   9476     if (block->NextFree() != VMA_NULL)
   9477         block->NextFree()->PrevFree() = block;
   9478     else
   9479     {
   9480         m_InnerIsFreeBitmap[memClass] |= 1U << secondIndex;
   9481         m_IsFreeBitmap |= 1UL << memClass;
   9482     }
   9483     ++m_BlocksFreeCount;
   9484     m_BlocksFreeSize += block->size;
   9485 }
   9486 
   9487 void VmaBlockMetadata_TLSF::MergeBlock(Block* block, Block* prev)
   9488 {
   9489     VMA_ASSERT(block->prevPhysical == prev && "Cannot merge separate physical regions!");
   9490     VMA_ASSERT(!prev->IsFree() && "Cannot merge block that belongs to free list!");
   9491 
   9492     block->offset = prev->offset;
   9493     block->size += prev->size;
   9494     block->prevPhysical = prev->prevPhysical;
   9495     if (block->prevPhysical)
   9496         block->prevPhysical->nextPhysical = block;
   9497     m_BlockAllocator.Free(prev);
   9498 }
   9499 
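        // Explanatory note on the O(1) lookup below: m_IsFreeBitmap has one bit per memory class
        // that currently holds any free block, and m_InnerIsFreeBitmap[class] has one bit per
        // second-level bucket, so two bit scans are enough to locate the smallest suitable
        // non-empty bucket without traversing any list.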
   9500 VmaBlockMetadata_TLSF::Block* VmaBlockMetadata_TLSF::FindFreeBlock(VkDeviceSize size, uint32_t& listIndex) const
   9501 {
   9502     uint8_t memoryClass = SizeToMemoryClass(size);
   9503     uint32_t innerFreeMap = m_InnerIsFreeBitmap[memoryClass] & (~0U << SizeToSecondIndex(size, memoryClass));
   9504     if (!innerFreeMap)
   9505     {
   9506         // Check higher levels for available blocks
   9507         uint32_t freeMap = m_IsFreeBitmap & (~0UL << (memoryClass + 1));
   9508         if (!freeMap)
   9509             return VMA_NULL; // No more memory available
   9510 
   9511         // Find lowest free region
   9512         memoryClass = VMA_BITSCAN_LSB(freeMap);
   9513         innerFreeMap = m_InnerIsFreeBitmap[memoryClass];
   9514         VMA_ASSERT(innerFreeMap != 0);
   9515     }
   9516     // Find lowest free subregion
   9517     listIndex = GetListIndex(memoryClass, VMA_BITSCAN_LSB(innerFreeMap));
   9518     VMA_ASSERT(m_FreeList[listIndex]);
   9519     return m_FreeList[listIndex];
   9520 }
   9521 
   9522 bool VmaBlockMetadata_TLSF::CheckBlock(
   9523     Block& block,
   9524     uint32_t listIndex,
   9525     VkDeviceSize allocSize,
   9526     VkDeviceSize allocAlignment,
   9527     VmaSuballocationType allocType,
   9528     VmaAllocationRequest* pAllocationRequest)
   9529 {
   9530     VMA_ASSERT(block.IsFree() && "Block is already taken!");
   9531 
   9532     VkDeviceSize alignedOffset = VmaAlignUp(block.offset, allocAlignment);
   9533     if (block.size < allocSize + alignedOffset - block.offset)
   9534         return false;
   9535 
   9536     // Check for granularity conflicts
   9537     if (!IsVirtual() &&
   9538         m_GranularityHandler.CheckConflictAndAlignUp(alignedOffset, allocSize, block.offset, block.size, allocType))
   9539         return false;
   9540 
   9541     // Alloc successful
   9542     pAllocationRequest->type = VmaAllocationRequestType::TLSF;
   9543     pAllocationRequest->allocHandle = (VmaAllocHandle)&block;
   9544     pAllocationRequest->size = allocSize - GetDebugMargin();
   9545     pAllocationRequest->customData = (void*)allocType;
   9546     pAllocationRequest->algorithmData = alignedOffset;
   9547 
   9548     // Move the block to the front of its free list if it is a normal block
   9549     if (listIndex != m_ListsCount && block.PrevFree())
   9550     {
   9551         block.PrevFree()->NextFree() = block.NextFree();
   9552         if (block.NextFree())
   9553             block.NextFree()->PrevFree() = block.PrevFree();
   9554         block.PrevFree() = VMA_NULL;
   9555         block.NextFree() = m_FreeList[listIndex];
   9556         m_FreeList[listIndex] = &block;
   9557         if (block.NextFree())
   9558             block.NextFree()->PrevFree() = &block;
   9559     }
   9560 
   9561     return true;
   9562 }
   9563 #endif // _VMA_BLOCK_METADATA_TLSF_FUNCTIONS
   9564 #endif // _VMA_BLOCK_METADATA_TLSF
   9565 
   9566 #ifndef _VMA_BLOCK_VECTOR
   9567 /*
   9568 Sequence of VmaDeviceMemoryBlock. Represents memory blocks allocated for a specific
   9569 Vulkan memory type.
   9570 
   9571 Synchronized internally with a mutex.
   9572 */
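        /*
        A summary of how allocation flows through this class, inferred from the members and methods
        below: Allocate() first tries the existing blocks - by default in ascending order of free
        space, so the fullest blocks fill up first - and falls back to CreateBlock(), which requests
        a new VkDeviceMemory, only when nothing fits and m_MaxBlockCount has not been reached.
        */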
   9573 class VmaBlockVector
   9574 {
   9575     friend struct VmaDefragmentationContext_T;
   9576     VMA_CLASS_NO_COPY_NO_MOVE(VmaBlockVector)
   9577 public:
   9578     VmaBlockVector(
   9579         VmaAllocator hAllocator,
   9580         VmaPool hParentPool,
   9581         uint32_t memoryTypeIndex,
   9582         VkDeviceSize preferredBlockSize,
   9583         size_t minBlockCount,
   9584         size_t maxBlockCount,
   9585         VkDeviceSize bufferImageGranularity,
   9586         bool explicitBlockSize,
   9587         uint32_t algorithm,
   9588         float priority,
   9589         VkDeviceSize minAllocationAlignment,
   9590         void* pMemoryAllocateNext);
   9591     ~VmaBlockVector();
   9592 
   9593     VmaAllocator GetAllocator() const { return m_hAllocator; }
   9594     VmaPool GetParentPool() const { return m_hParentPool; }
   9595     bool IsCustomPool() const { return m_hParentPool != VMA_NULL; }
   9596     uint32_t GetMemoryTypeIndex() const { return m_MemoryTypeIndex; }
   9597     VkDeviceSize GetPreferredBlockSize() const { return m_PreferredBlockSize; }
   9598     VkDeviceSize GetBufferImageGranularity() const { return m_BufferImageGranularity; }
   9599     uint32_t GetAlgorithm() const { return m_Algorithm; }
   9600     bool HasExplicitBlockSize() const { return m_ExplicitBlockSize; }
   9601     float GetPriority() const { return m_Priority; }
   9602     const void* GetAllocationNextPtr() const { return m_pMemoryAllocateNext; }
   9603     // To be used only while the m_Mutex is locked. Used during defragmentation.
   9604     size_t GetBlockCount() const { return m_Blocks.size(); }
   9605     // To be used only while the m_Mutex is locked. Used during defragmentation.
   9606     VmaDeviceMemoryBlock* GetBlock(size_t index) const { return m_Blocks[index]; }
   9607     VMA_RW_MUTEX &GetMutex() { return m_Mutex; }
   9608 
   9609     VkResult CreateMinBlocks();
   9610     void AddStatistics(VmaStatistics& inoutStats);
   9611     void AddDetailedStatistics(VmaDetailedStatistics& inoutStats);
   9612     bool IsEmpty();
   9613     bool IsCorruptionDetectionEnabled() const;
   9614 
   9615     VkResult Allocate(
   9616         VkDeviceSize size,
   9617         VkDeviceSize alignment,
   9618         const VmaAllocationCreateInfo& createInfo,
   9619         VmaSuballocationType suballocType,
   9620         size_t allocationCount,
   9621         VmaAllocation* pAllocations);
   9622 
   9623     void Free(const VmaAllocation hAllocation);
   9624 
   9625 #if VMA_STATS_STRING_ENABLED
   9626     void PrintDetailedMap(class VmaJsonWriter& json);
   9627 #endif
   9628 
   9629     VkResult CheckCorruption();
   9630 
   9631 private:
   9632     const VmaAllocator m_hAllocator;
   9633     const VmaPool m_hParentPool;
   9634     const uint32_t m_MemoryTypeIndex;
   9635     const VkDeviceSize m_PreferredBlockSize;
   9636     const size_t m_MinBlockCount;
   9637     const size_t m_MaxBlockCount;
   9638     const VkDeviceSize m_BufferImageGranularity;
   9639     const bool m_ExplicitBlockSize;
   9640     const uint32_t m_Algorithm;
   9641     const float m_Priority;
   9642     const VkDeviceSize m_MinAllocationAlignment;
   9643 
   9644     void* const m_pMemoryAllocateNext;
   9645     VMA_RW_MUTEX m_Mutex;
   9646     // Incrementally sorted by sumFreeSize, ascending.
   9647     VmaVector<VmaDeviceMemoryBlock*, VmaStlAllocator<VmaDeviceMemoryBlock*>> m_Blocks;
   9648     uint32_t m_NextBlockId;
   9649     bool m_IncrementalSort = true;
   9650 
   9651     void SetIncrementalSort(bool val) { m_IncrementalSort = val; }
   9652 
   9653     VkDeviceSize CalcMaxBlockSize() const;
   9654     // Finds and removes given block from vector.
   9655     void Remove(VmaDeviceMemoryBlock* pBlock);
   9656     // Performs single step in sorting m_Blocks. They may not be fully sorted
   9657     // after this call.
   9658     void IncrementallySortBlocks();
   9659     void SortByFreeSize();
   9660 
   9661     VkResult AllocatePage(
   9662         VkDeviceSize size,
   9663         VkDeviceSize alignment,
   9664         const VmaAllocationCreateInfo& createInfo,
   9665         VmaSuballocationType suballocType,
   9666         VmaAllocation* pAllocation);
   9667 
   9668     VkResult AllocateFromBlock(
   9669         VmaDeviceMemoryBlock* pBlock,
   9670         VkDeviceSize size,
   9671         VkDeviceSize alignment,
   9672         VmaAllocationCreateFlags allocFlags,
   9673         void* pUserData,
   9674         VmaSuballocationType suballocType,
   9675         uint32_t strategy,
   9676         VmaAllocation* pAllocation);
   9677 
   9678     VkResult CommitAllocationRequest(
   9679         VmaAllocationRequest& allocRequest,
   9680         VmaDeviceMemoryBlock* pBlock,
   9681         VkDeviceSize alignment,
   9682         VmaAllocationCreateFlags allocFlags,
   9683         void* pUserData,
   9684         VmaSuballocationType suballocType,
   9685         VmaAllocation* pAllocation);
   9686 
   9687     VkResult CreateBlock(VkDeviceSize blockSize, size_t* pNewBlockIndex);
   9688     bool HasEmptyBlock();
   9689 };
   9690 #endif // _VMA_BLOCK_VECTOR
   9691 
   9692 #ifndef _VMA_DEFRAGMENTATION_CONTEXT
   9693 struct VmaDefragmentationContext_T
   9694 {
   9695     VMA_CLASS_NO_COPY_NO_MOVE(VmaDefragmentationContext_T)
   9696 public:
   9697     VmaDefragmentationContext_T(
   9698         VmaAllocator hAllocator,
   9699         const VmaDefragmentationInfo& info);
   9700     ~VmaDefragmentationContext_T();
   9701 
   9702     void GetStats(VmaDefragmentationStats& outStats) { outStats = m_GlobalStats; }
   9703 
   9704     VkResult DefragmentPassBegin(VmaDefragmentationPassMoveInfo& moveInfo);
   9705     VkResult DefragmentPassEnd(VmaDefragmentationPassMoveInfo& moveInfo);
   9706 
   9707 private:
   9708     // Max number of allocations to ignore due to size constraints before ending a single pass
   9709     static const uint8_t MAX_ALLOCS_TO_IGNORE = 16;
   9710     enum class CounterStatus { Pass, Ignore, End };
   9711 
   9712     struct FragmentedBlock
   9713     {
   9714         uint32_t data;
   9715         VmaDeviceMemoryBlock* block;
   9716     };
   9717     struct StateBalanced
   9718     {
   9719         VkDeviceSize avgFreeSize = 0;
   9720         VkDeviceSize avgAllocSize = UINT64_MAX;
   9721     };
   9722     struct StateExtensive
   9723     {
   9724         enum class Operation : uint8_t
   9725         {
   9726             FindFreeBlockBuffer, FindFreeBlockTexture, FindFreeBlockAll,
   9727             MoveBuffers, MoveTextures, MoveAll,
   9728             Cleanup, Done
   9729         };
   9730 
   9731         Operation operation = Operation::FindFreeBlockTexture;
   9732         size_t firstFreeBlock = SIZE_MAX;
   9733     };
   9734     struct MoveAllocationData
   9735     {
   9736         VkDeviceSize size;
   9737         VkDeviceSize alignment;
   9738         VmaSuballocationType type;
   9739         VmaAllocationCreateFlags flags;
   9740         VmaDefragmentationMove move = {};
   9741     };
   9742 
   9743     const VkDeviceSize m_MaxPassBytes;
   9744     const uint32_t m_MaxPassAllocations;
   9745     const PFN_vmaCheckDefragmentationBreakFunction m_BreakCallback;
   9746     void* m_BreakCallbackUserData;
   9747 
   9748     VmaStlAllocator<VmaDefragmentationMove> m_MoveAllocator;
   9749     VmaVector<VmaDefragmentationMove, VmaStlAllocator<VmaDefragmentationMove>> m_Moves;
   9750 
   9751     uint8_t m_IgnoredAllocs = 0;
   9752     uint32_t m_Algorithm;
   9753     uint32_t m_BlockVectorCount;
   9754     VmaBlockVector* m_PoolBlockVector;
   9755     VmaBlockVector** m_pBlockVectors;
   9756     size_t m_ImmovableBlockCount = 0;
   9757     VmaDefragmentationStats m_GlobalStats = { 0 };
   9758     VmaDefragmentationStats m_PassStats = { 0 };
   9759     void* m_AlgorithmState = VMA_NULL;
   9760 
   9761     static MoveAllocationData GetMoveData(VmaAllocHandle handle, VmaBlockMetadata* metadata);
   9762     CounterStatus CheckCounters(VkDeviceSize bytes);
   9763     bool IncrementCounters(VkDeviceSize bytes);
   9764     bool ReallocWithinBlock(VmaBlockVector& vector, VmaDeviceMemoryBlock* block);
   9765     bool AllocInOtherBlock(size_t start, size_t end, MoveAllocationData& data, VmaBlockVector& vector);
   9766 
   9767     bool ComputeDefragmentation(VmaBlockVector& vector, size_t index);
   9768     bool ComputeDefragmentation_Fast(VmaBlockVector& vector);
   9769     bool ComputeDefragmentation_Balanced(VmaBlockVector& vector, size_t index, bool update);
   9770     bool ComputeDefragmentation_Full(VmaBlockVector& vector);
   9771     bool ComputeDefragmentation_Extensive(VmaBlockVector& vector, size_t index);
   9772 
   9773     void UpdateVectorStatistics(VmaBlockVector& vector, StateBalanced& state);
   9774     bool MoveDataToFreeBlocks(VmaSuballocationType currentType,
   9775         VmaBlockVector& vector, size_t firstFreeBlock,
   9776         bool& texturePresent, bool& bufferPresent, bool& otherPresent);
   9777 };
   9778 #endif // _VMA_DEFRAGMENTATION_CONTEXT
   9779 
   9780 #ifndef _VMA_POOL_T
   9781 struct VmaPool_T
   9782 {
   9783     friend struct VmaPoolListItemTraits;
   9784     VMA_CLASS_NO_COPY_NO_MOVE(VmaPool_T)
   9785 public:
   9786     VmaBlockVector m_BlockVector;
   9787     VmaDedicatedAllocationList m_DedicatedAllocations;
   9788 
   9789     VmaPool_T(
   9790         VmaAllocator hAllocator,
   9791         const VmaPoolCreateInfo& createInfo,
   9792         VkDeviceSize preferredBlockSize);
   9793     ~VmaPool_T();
   9794 
   9795     uint32_t GetId() const { return m_Id; }
   9796     void SetId(uint32_t id) { VMA_ASSERT(m_Id == 0); m_Id = id; }
   9797 
   9798     const char* GetName() const { return m_Name; }
   9799     void SetName(const char* pName);
   9800 
   9801 #if VMA_STATS_STRING_ENABLED
   9802     //void PrintDetailedMap(class VmaStringBuilder& sb);
   9803 #endif
   9804 
   9805 private:
   9806     uint32_t m_Id;
   9807     char* m_Name;
   9808     VmaPool_T* m_PrevPool = VMA_NULL;
   9809     VmaPool_T* m_NextPool = VMA_NULL;
   9810 };
   9811 
   9812 struct VmaPoolListItemTraits
   9813 {
   9814     typedef VmaPool_T ItemType;
   9815 
   9816     static ItemType* GetPrev(const ItemType* item) { return item->m_PrevPool; }
   9817     static ItemType* GetNext(const ItemType* item) { return item->m_NextPool; }
   9818     static ItemType*& AccessPrev(ItemType* item) { return item->m_PrevPool; }
   9819     static ItemType*& AccessNext(ItemType* item) { return item->m_NextPool; }
   9820 };
   9821 #endif // _VMA_POOL_T
   9822 
   9823 #ifndef _VMA_CURRENT_BUDGET_DATA
   9824 struct VmaCurrentBudgetData
   9825 {
   9826     VMA_CLASS_NO_COPY_NO_MOVE(VmaCurrentBudgetData)
   9827 public:
   9828 
   9829     VMA_ATOMIC_UINT32 m_BlockCount[VK_MAX_MEMORY_HEAPS];
   9830     VMA_ATOMIC_UINT32 m_AllocationCount[VK_MAX_MEMORY_HEAPS];
   9831     VMA_ATOMIC_UINT64 m_BlockBytes[VK_MAX_MEMORY_HEAPS];
   9832     VMA_ATOMIC_UINT64 m_AllocationBytes[VK_MAX_MEMORY_HEAPS];
   9833 
   9834 #if VMA_MEMORY_BUDGET
   9835     VMA_ATOMIC_UINT32 m_OperationsSinceBudgetFetch;
   9836     VMA_RW_MUTEX m_BudgetMutex;
   9837     uint64_t m_VulkanUsage[VK_MAX_MEMORY_HEAPS];
   9838     uint64_t m_VulkanBudget[VK_MAX_MEMORY_HEAPS];
   9839     uint64_t m_BlockBytesAtBudgetFetch[VK_MAX_MEMORY_HEAPS];
   9840 #endif // VMA_MEMORY_BUDGET
   9841 
   9842     VmaCurrentBudgetData();
   9843 
   9844     void AddAllocation(uint32_t heapIndex, VkDeviceSize allocationSize);
   9845     void RemoveAllocation(uint32_t heapIndex, VkDeviceSize allocationSize);
   9846 };
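        /*
        Explanatory note: the atomic counters above are updated on every allocation and free, while
        the VMA_MEMORY_BUDGET members cache the usage/budget values last fetched via the
        VK_EXT_memory_budget extension; m_OperationsSinceBudgetFetch lets the allocator judge when
        that cached snapshot is stale enough to be refetched.
        */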
   9847 
   9848 #ifndef _VMA_CURRENT_BUDGET_DATA_FUNCTIONS
   9849 VmaCurrentBudgetData::VmaCurrentBudgetData()
   9850 {
   9851     for (uint32_t heapIndex = 0; heapIndex < VK_MAX_MEMORY_HEAPS; ++heapIndex)
   9852     {
   9853         m_BlockCount[heapIndex] = 0;
   9854         m_AllocationCount[heapIndex] = 0;
   9855         m_BlockBytes[heapIndex] = 0;
   9856         m_AllocationBytes[heapIndex] = 0;
   9857 #if VMA_MEMORY_BUDGET
   9858         m_VulkanUsage[heapIndex] = 0;
   9859         m_VulkanBudget[heapIndex] = 0;
   9860         m_BlockBytesAtBudgetFetch[heapIndex] = 0;
   9861 #endif
   9862     }
   9863 
   9864 #if VMA_MEMORY_BUDGET
   9865     m_OperationsSinceBudgetFetch = 0;
   9866 #endif
   9867 }
   9868 
   9869 void VmaCurrentBudgetData::AddAllocation(uint32_t heapIndex, VkDeviceSize allocationSize)
   9870 {
   9871     m_AllocationBytes[heapIndex] += allocationSize;
   9872     ++m_AllocationCount[heapIndex];
   9873 #if VMA_MEMORY_BUDGET
   9874     ++m_OperationsSinceBudgetFetch;
   9875 #endif
   9876 }
   9877 
   9878 void VmaCurrentBudgetData::RemoveAllocation(uint32_t heapIndex, VkDeviceSize allocationSize)
   9879 {
   9880     VMA_ASSERT(m_AllocationBytes[heapIndex] >= allocationSize);
   9881     m_AllocationBytes[heapIndex] -= allocationSize;
   9882     VMA_ASSERT(m_AllocationCount[heapIndex] > 0);
   9883     --m_AllocationCount[heapIndex];
   9884 #if VMA_MEMORY_BUDGET
   9885     ++m_OperationsSinceBudgetFetch;
   9886 #endif
   9887 }
   9888 #endif // _VMA_CURRENT_BUDGET_DATA_FUNCTIONS
   9889 #endif // _VMA_CURRENT_BUDGET_DATA
   9890 
   9891 #ifndef _VMA_ALLOCATION_OBJECT_ALLOCATOR
   9892 /*
   9893 Thread-safe wrapper over VmaPoolAllocator free list, for allocation of VmaAllocation_T objects.
   9894 */
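        /*
        The pattern below is a simple coarse-grained lock: each Allocate()/Free() holds m_Mutex for
        the duration of the call, while the underlying VmaPoolAllocator recycles fixed-size
        VmaAllocation_T objects from pooled storage (first block sized for 1024 objects, per the
        constructor).
        */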
   9895 class VmaAllocationObjectAllocator
   9896 {
   9897     VMA_CLASS_NO_COPY_NO_MOVE(VmaAllocationObjectAllocator)
   9898 public:
   9899     VmaAllocationObjectAllocator(const VkAllocationCallbacks* pAllocationCallbacks)
   9900         : m_Allocator(pAllocationCallbacks, 1024) {}
   9901 
   9902     template<typename... Types> VmaAllocation Allocate(Types&&... args);
   9903     void Free(VmaAllocation hAlloc);
   9904 
   9905 private:
   9906     VMA_MUTEX m_Mutex;
   9907     VmaPoolAllocator<VmaAllocation_T> m_Allocator;
   9908 };
   9909 
   9910 template<typename... Types>
   9911 VmaAllocation VmaAllocationObjectAllocator::Allocate(Types&&... args)
   9912 {
   9913     VmaMutexLock mutexLock(m_Mutex);
   9914     return m_Allocator.Alloc<Types...>(std::forward<Types>(args)...);
   9915 }
   9916 
   9917 void VmaAllocationObjectAllocator::Free(VmaAllocation hAlloc)
   9918 {
   9919     VmaMutexLock mutexLock(m_Mutex);
   9920     m_Allocator.Free(hAlloc);
   9921 }
   9922 #endif // _VMA_ALLOCATION_OBJECT_ALLOCATOR
   9923 
   9924 #ifndef _VMA_VIRTUAL_BLOCK_T
   9925 struct VmaVirtualBlock_T
   9926 {
   9927     VMA_CLASS_NO_COPY_NO_MOVE(VmaVirtualBlock_T)
   9928 public:
   9929     const bool m_AllocationCallbacksSpecified;
   9930     const VkAllocationCallbacks m_AllocationCallbacks;
   9931 
   9932     VmaVirtualBlock_T(const VmaVirtualBlockCreateInfo& createInfo);
   9933     ~VmaVirtualBlock_T();
   9934 
   9935     VkResult Init() { return VK_SUCCESS; }
   9936     bool IsEmpty() const { return m_Metadata->IsEmpty(); }
   9937     void Free(VmaVirtualAllocation allocation) { m_Metadata->Free((VmaAllocHandle)allocation); }
   9938     void SetAllocationUserData(VmaVirtualAllocation allocation, void* userData) { m_Metadata->SetAllocationUserData((VmaAllocHandle)allocation, userData); }
   9939     void Clear() { m_Metadata->Clear(); }
   9940 
   9941     const VkAllocationCallbacks* GetAllocationCallbacks() const;
   9942     void GetAllocationInfo(VmaVirtualAllocation allocation, VmaVirtualAllocationInfo& outInfo);
   9943     VkResult Allocate(const VmaVirtualAllocationCreateInfo& createInfo, VmaVirtualAllocation& outAllocation,
   9944         VkDeviceSize* outOffset);
   9945     void GetStatistics(VmaStatistics& outStats) const;
   9946     void CalculateDetailedStatistics(VmaDetailedStatistics& outStats) const;
   9947 #if VMA_STATS_STRING_ENABLED
   9948     void BuildStatsString(bool detailedMap, VmaStringBuilder& sb) const;
   9949 #endif
   9950 
   9951 private:
   9952     VmaBlockMetadata* m_Metadata;
   9953 };
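        /*
        Illustrative use of the public virtual-allocator API that wraps this struct - a minimal
        sketch built on vmaCreateVirtualBlock / vmaVirtualAllocate / vmaVirtualFree declared
        earlier in this header; the sizes are arbitrary example values:

            VmaVirtualBlockCreateInfo blockCreateInfo = {};
            blockCreateInfo.size = 1048576; // 1 MiB of offset space to sub-allocate

            VmaVirtualBlock block;
            VkResult res = vmaCreateVirtualBlock(&blockCreateInfo, &block);

            VmaVirtualAllocationCreateInfo allocCreateInfo = {};
            allocCreateInfo.size = 4096;

            VmaVirtualAllocation alloc;
            VkDeviceSize offset;
            res = vmaVirtualAllocate(block, &allocCreateInfo, &alloc, &offset);
            // ... use the range [offset, offset + 4096) within your own resource ...

            vmaVirtualFree(block, alloc);
            vmaDestroyVirtualBlock(block);
        */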
   9954 
   9955 #ifndef _VMA_VIRTUAL_BLOCK_T_FUNCTIONS
   9956 VmaVirtualBlock_T::VmaVirtualBlock_T(const VmaVirtualBlockCreateInfo& createInfo)
   9957     : m_AllocationCallbacksSpecified(createInfo.pAllocationCallbacks != VMA_NULL),
   9958     m_AllocationCallbacks(createInfo.pAllocationCallbacks != VMA_NULL ? *createInfo.pAllocationCallbacks : VmaEmptyAllocationCallbacks)
   9959 {
   9960     const uint32_t algorithm = createInfo.flags & VMA_VIRTUAL_BLOCK_CREATE_ALGORITHM_MASK;
   9961     switch (algorithm)
   9962     {
   9963     case 0:
   9964         m_Metadata = vma_new(GetAllocationCallbacks(), VmaBlockMetadata_TLSF)(VK_NULL_HANDLE, 1, true);
   9965         break;
   9966     case VMA_VIRTUAL_BLOCK_CREATE_LINEAR_ALGORITHM_BIT:
   9967         m_Metadata = vma_new(GetAllocationCallbacks(), VmaBlockMetadata_Linear)(VK_NULL_HANDLE, 1, true);
   9968         break;
   9969     default:
   9970         VMA_ASSERT(0);
   9971         m_Metadata = vma_new(GetAllocationCallbacks(), VmaBlockMetadata_TLSF)(VK_NULL_HANDLE, 1, true);
   9972     }
   9973 
   9974     m_Metadata->Init(createInfo.size);
   9975 }
   9976 
   9977 VmaVirtualBlock_T::~VmaVirtualBlock_T()
   9978 {
   9979     // Define macro VMA_DEBUG_LOG_FORMAT or more specialized VMA_LEAK_LOG_FORMAT
   9980     // to receive the list of the unfreed allocations.
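            // For example, a sketch of a definition that must appear before this header is included:
            //   #define VMA_DEBUG_LOG_FORMAT(format, ...) \
            //       do { printf((format), __VA_ARGS__); printf("\n"); } while(false)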
   9981     if (!m_Metadata->IsEmpty())
   9982         m_Metadata->DebugLogAllAllocations();
   9983     // This is the most important assert in the entire library.
   9984     // Hitting it means you have some memory leak - unreleased virtual allocations.
   9985     VMA_ASSERT_LEAK(m_Metadata->IsEmpty() && "Some virtual allocations were not freed before destruction of this virtual block!");
   9986 
   9987     vma_delete(GetAllocationCallbacks(), m_Metadata);
   9988 }
   9989 
   9990 const VkAllocationCallbacks* VmaVirtualBlock_T::GetAllocationCallbacks() const
   9991 {
   9992     return m_AllocationCallbacksSpecified ? &m_AllocationCallbacks : VMA_NULL;
   9993 }
   9994 
   9995 void VmaVirtualBlock_T::GetAllocationInfo(VmaVirtualAllocation allocation, VmaVirtualAllocationInfo& outInfo)
   9996 {
   9997     m_Metadata->GetAllocationInfo((VmaAllocHandle)allocation, outInfo);
   9998 }
   9999 
  10000 VkResult VmaVirtualBlock_T::Allocate(const VmaVirtualAllocationCreateInfo& createInfo, VmaVirtualAllocation& outAllocation,
  10001     VkDeviceSize* outOffset)
  10002 {
  10003     VmaAllocationRequest request = {};
  10004     if (m_Metadata->CreateAllocationRequest(
  10005         createInfo.size, // allocSize
  10006         VMA_MAX(createInfo.alignment, (VkDeviceSize)1), // allocAlignment
  10007         (createInfo.flags & VMA_VIRTUAL_ALLOCATION_CREATE_UPPER_ADDRESS_BIT) != 0, // upperAddress
  10008         VMA_SUBALLOCATION_TYPE_UNKNOWN, // allocType - unimportant
  10009         createInfo.flags & VMA_VIRTUAL_ALLOCATION_CREATE_STRATEGY_MASK, // strategy
  10010         &request))
  10011     {
  10012         m_Metadata->Alloc(request,
  10013             VMA_SUBALLOCATION_TYPE_UNKNOWN, // type - unimportant
  10014             createInfo.pUserData);
  10015         outAllocation = (VmaVirtualAllocation)request.allocHandle;
  10016         if(outOffset)
  10017             *outOffset = m_Metadata->GetAllocationOffset(request.allocHandle);
  10018         return VK_SUCCESS;
  10019     }
  10020     outAllocation = (VmaVirtualAllocation)VK_NULL_HANDLE;
  10021     if (outOffset)
  10022         *outOffset = UINT64_MAX;
  10023     return VK_ERROR_OUT_OF_DEVICE_MEMORY;
  10024 }
  10025 
  10026 void VmaVirtualBlock_T::GetStatistics(VmaStatistics& outStats) const
  10027 {
  10028     VmaClearStatistics(outStats);
  10029     m_Metadata->AddStatistics(outStats);
  10030 }
  10031 
  10032 void VmaVirtualBlock_T::CalculateDetailedStatistics(VmaDetailedStatistics& outStats) const
  10033 {
  10034     VmaClearDetailedStatistics(outStats);
  10035     m_Metadata->AddDetailedStatistics(outStats);
  10036 }
  10037 
  10038 #if VMA_STATS_STRING_ENABLED
  10039 void VmaVirtualBlock_T::BuildStatsString(bool detailedMap, VmaStringBuilder& sb) const
  10040 {
  10041     VmaJsonWriter json(GetAllocationCallbacks(), sb);
  10042     json.BeginObject();
  10043 
  10044     VmaDetailedStatistics stats;
  10045     CalculateDetailedStatistics(stats);
  10046 
  10047     json.WriteString("Stats");
  10048     VmaPrintDetailedStatistics(json, stats);
  10049 
  10050     if (detailedMap)
  10051     {
  10052         json.WriteString("Details");
  10053         json.BeginObject();
  10054         m_Metadata->PrintDetailedMap(json);
  10055         json.EndObject();
  10056     }
  10057 
  10058     json.EndObject();
  10059 }
  10060 #endif // VMA_STATS_STRING_ENABLED
  10061 #endif // _VMA_VIRTUAL_BLOCK_T_FUNCTIONS
  10062 #endif // _VMA_VIRTUAL_BLOCK_T
  10063 
  10064 
  10065 // Main allocator object.
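        /*
        Illustrative creation of this object through the public API - a sketch using
        vmaCreateAllocator, declared earlier in this header; the Vulkan handles are assumed to
        come from the application's existing setup:

            VmaAllocatorCreateInfo allocatorInfo = {};
            allocatorInfo.vulkanApiVersion = VK_API_VERSION_1_2;
            allocatorInfo.physicalDevice = physicalDevice;
            allocatorInfo.device = device;
            allocatorInfo.instance = instance;

            VmaAllocator allocator;
            VkResult res = vmaCreateAllocator(&allocatorInfo, &allocator);
            // ... create buffers/images through the allocator ...
            vmaDestroyAllocator(allocator);
        */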
  10066 struct VmaAllocator_T
  10067 {
  10068     VMA_CLASS_NO_COPY_NO_MOVE(VmaAllocator_T)
  10069 public:
  10070     const bool m_UseMutex;
  10071     const uint32_t m_VulkanApiVersion;
  10072     bool m_UseKhrDedicatedAllocation; // Can be set only if m_VulkanApiVersion < VK_MAKE_VERSION(1, 1, 0).
  10073     bool m_UseKhrBindMemory2; // Can be set only if m_VulkanApiVersion < VK_MAKE_VERSION(1, 1, 0).
  10074     bool m_UseExtMemoryBudget;
  10075     bool m_UseAmdDeviceCoherentMemory;
  10076     bool m_UseKhrBufferDeviceAddress;
  10077     bool m_UseExtMemoryPriority;
  10078     bool m_UseKhrMaintenance4;
  10079     bool m_UseKhrMaintenance5;
  10080     const VkDevice m_hDevice;
  10081     const VkInstance m_hInstance;
  10082     const bool m_AllocationCallbacksSpecified;
  10083     const VkAllocationCallbacks m_AllocationCallbacks;
  10084     VmaDeviceMemoryCallbacks m_DeviceMemoryCallbacks;
  10085     VmaAllocationObjectAllocator m_AllocationObjectAllocator;
  10086 
  10087     // Each bit (1 << i) is set if HeapSizeLimit is enabled for that heap, so no more than the heap size can be allocated from it.
  10088     uint32_t m_HeapSizeLimitMask;
  10089 
  10090     VkPhysicalDeviceProperties m_PhysicalDeviceProperties;
  10091     VkPhysicalDeviceMemoryProperties m_MemProps;
  10092 
  10093     // Default pools.
  10094     VmaBlockVector* m_pBlockVectors[VK_MAX_MEMORY_TYPES];
  10095     VmaDedicatedAllocationList m_DedicatedAllocations[VK_MAX_MEMORY_TYPES];
  10096 
  10097     VmaCurrentBudgetData m_Budget;
  10098     VMA_ATOMIC_UINT32 m_DeviceMemoryCount; // Total number of VkDeviceMemory objects.
  10099 
  10100     VmaAllocator_T(const VmaAllocatorCreateInfo* pCreateInfo);
  10101     VkResult Init(const VmaAllocatorCreateInfo* pCreateInfo);
  10102     ~VmaAllocator_T();
  10103 
  10104     const VkAllocationCallbacks* GetAllocationCallbacks() const
  10105     {
  10106         return m_AllocationCallbacksSpecified ? &m_AllocationCallbacks : VMA_NULL;
  10107     }
  10108     const VmaVulkanFunctions& GetVulkanFunctions() const
  10109     {
  10110         return m_VulkanFunctions;
  10111     }
  10112 
  10113     VkPhysicalDevice GetPhysicalDevice() const { return m_PhysicalDevice; }
  10114 
  10115     VkDeviceSize GetBufferImageGranularity() const
  10116     {
  10117         return VMA_MAX(
  10118             static_cast<VkDeviceSize>(VMA_DEBUG_MIN_BUFFER_IMAGE_GRANULARITY),
  10119             m_PhysicalDeviceProperties.limits.bufferImageGranularity);
  10120     }
  10121 
  10122     uint32_t GetMemoryHeapCount() const { return m_MemProps.memoryHeapCount; }
  10123     uint32_t GetMemoryTypeCount() const { return m_MemProps.memoryTypeCount; }
  10124 
  10125     uint32_t MemoryTypeIndexToHeapIndex(uint32_t memTypeIndex) const
  10126     {
  10127         VMA_ASSERT(memTypeIndex < m_MemProps.memoryTypeCount);
  10128         return m_MemProps.memoryTypes[memTypeIndex].heapIndex;
  10129     }
  10130     // True when specific memory type is HOST_VISIBLE but not HOST_COHERENT.
  10131     bool IsMemoryTypeNonCoherent(uint32_t memTypeIndex) const
  10132     {
  10133         return (m_MemProps.memoryTypes[memTypeIndex].propertyFlags & (VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT | VK_MEMORY_PROPERTY_HOST_COHERENT_BIT)) ==
  10134             VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
  10135     }
  10136     // Minimum alignment for all allocations in specific memory type.
  10137     VkDeviceSize GetMemoryTypeMinAlignment(uint32_t memTypeIndex) const
  10138     {
  10139         return IsMemoryTypeNonCoherent(memTypeIndex) ?
  10140             VMA_MAX((VkDeviceSize)VMA_MIN_ALIGNMENT, m_PhysicalDeviceProperties.limits.nonCoherentAtomSize) :
  10141             (VkDeviceSize)VMA_MIN_ALIGNMENT;
  10142     }
  10143 
  10144     bool IsIntegratedGpu() const
  10145     {
  10146         return m_PhysicalDeviceProperties.deviceType == VK_PHYSICAL_DEVICE_TYPE_INTEGRATED_GPU;
  10147     }
  10148 
  10149     uint32_t GetGlobalMemoryTypeBits() const { return m_GlobalMemoryTypeBits; }
  10150 
  10151     void GetBufferMemoryRequirements(
  10152         VkBuffer hBuffer,
  10153         VkMemoryRequirements& memReq,
  10154         bool& requiresDedicatedAllocation,
  10155         bool& prefersDedicatedAllocation) const;
  10156     void GetImageMemoryRequirements(
  10157         VkImage hImage,
  10158         VkMemoryRequirements& memReq,
  10159         bool& requiresDedicatedAllocation,
  10160         bool& prefersDedicatedAllocation) const;
  10161     VkResult FindMemoryTypeIndex(
  10162         uint32_t memoryTypeBits,
  10163         const VmaAllocationCreateInfo* pAllocationCreateInfo,
  10164         VmaBufferImageUsage bufImgUsage,
  10165         uint32_t* pMemoryTypeIndex) const;
  10166 
  10167     // Main allocation function.
  10168     VkResult AllocateMemory(
  10169         const VkMemoryRequirements& vkMemReq,
  10170         bool requiresDedicatedAllocation,
  10171         bool prefersDedicatedAllocation,
  10172         VkBuffer dedicatedBuffer,
  10173         VkImage dedicatedImage,
  10174         VmaBufferImageUsage dedicatedBufferImageUsage,
  10175         const VmaAllocationCreateInfo& createInfo,
  10176         VmaSuballocationType suballocType,
  10177         size_t allocationCount,
  10178         VmaAllocation* pAllocations);
  10179 
  10180     // Main deallocation function.
  10181     void FreeMemory(
  10182         size_t allocationCount,
  10183         const VmaAllocation* pAllocations);
  10184 
  10185     void CalculateStatistics(VmaTotalStatistics* pStats);
  10186 
  10187     void GetHeapBudgets(
  10188         VmaBudget* outBudgets, uint32_t firstHeap, uint32_t heapCount);
  10189 
  10190 #if VMA_STATS_STRING_ENABLED
  10191     void PrintDetailedMap(class VmaJsonWriter& json);
  10192 #endif
  10193 
  10194     void GetAllocationInfo(VmaAllocation hAllocation, VmaAllocationInfo* pAllocationInfo);
  10195     void GetAllocationInfo2(VmaAllocation hAllocation, VmaAllocationInfo2* pAllocationInfo);
  10196 
  10197     VkResult CreatePool(const VmaPoolCreateInfo* pCreateInfo, VmaPool* pPool);
  10198     void DestroyPool(VmaPool pool);
  10199     void GetPoolStatistics(VmaPool pool, VmaStatistics* pPoolStats);
  10200     void CalculatePoolStatistics(VmaPool pool, VmaDetailedStatistics* pPoolStats);
  10201 
  10202     void SetCurrentFrameIndex(uint32_t frameIndex);
  10203     uint32_t GetCurrentFrameIndex() const { return m_CurrentFrameIndex.load(); }
  10204 
  10205     VkResult CheckPoolCorruption(VmaPool hPool);
  10206     VkResult CheckCorruption(uint32_t memoryTypeBits);
  10207 
  10208     // Call to Vulkan function vkAllocateMemory with accompanying bookkeeping.
  10209     VkResult AllocateVulkanMemory(const VkMemoryAllocateInfo* pAllocateInfo, VkDeviceMemory* pMemory);
  10210     // Call to Vulkan function vkFreeMemory with accompanying bookkeeping.
  10211     void FreeVulkanMemory(uint32_t memoryType, VkDeviceSize size, VkDeviceMemory hMemory);
  10212     // Call to Vulkan function vkBindBufferMemory or vkBindBufferMemory2KHR.
  10213     VkResult BindVulkanBuffer(
  10214         VkDeviceMemory memory,
  10215         VkDeviceSize memoryOffset,
  10216         VkBuffer buffer,
  10217         const void* pNext);
  10218     // Call to Vulkan function vkBindImageMemory or vkBindImageMemory2KHR.
  10219     VkResult BindVulkanImage(
  10220         VkDeviceMemory memory,
  10221         VkDeviceSize memoryOffset,
  10222         VkImage image,
  10223         const void* pNext);
  10224 
  10225     VkResult Map(VmaAllocation hAllocation, void** ppData);
  10226     void Unmap(VmaAllocation hAllocation);
  10227 
  10228     VkResult BindBufferMemory(
  10229         VmaAllocation hAllocation,
  10230         VkDeviceSize allocationLocalOffset,
  10231         VkBuffer hBuffer,
  10232         const void* pNext);
  10233     VkResult BindImageMemory(
  10234         VmaAllocation hAllocation,
  10235         VkDeviceSize allocationLocalOffset,
  10236         VkImage hImage,
  10237         const void* pNext);
  10238 
  10239     VkResult FlushOrInvalidateAllocation(
  10240         VmaAllocation hAllocation,
  10241         VkDeviceSize offset, VkDeviceSize size,
  10242         VMA_CACHE_OPERATION op);
  10243     VkResult FlushOrInvalidateAllocations(
  10244         uint32_t allocationCount,
  10245         const VmaAllocation* allocations,
  10246         const VkDeviceSize* offsets, const VkDeviceSize* sizes,
  10247         VMA_CACHE_OPERATION op);
  10248 
  10249     VkResult CopyMemoryToAllocation(
  10250         const void* pSrcHostPointer,
  10251         VmaAllocation dstAllocation,
  10252         VkDeviceSize dstAllocationLocalOffset,
  10253         VkDeviceSize size);
  10254     VkResult CopyAllocationToMemory(
  10255         VmaAllocation srcAllocation,
  10256         VkDeviceSize srcAllocationLocalOffset,
  10257         void* pDstHostPointer,
  10258         VkDeviceSize size);
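
    // These two back the public vmaCopyMemoryToAllocation() / vmaCopyAllocationToMemory()
    // helpers, which map, memcpy, flush/invalidate non-coherent memory, and unmap as needed.
    // A one-line sketch (names illustrative):
    //   vmaCopyMemoryToAllocation(allocator, srcData, alloc, 0, srcSize);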
  10259 
  10260     void FillAllocation(const VmaAllocation hAllocation, uint8_t pattern);
  10261 
  10262     /*
  10263     Returns a bit mask of the memory types that can support defragmentation on the
  10264     GPU, i.e. those that allow creation of the buffer required for copy operations.
  10265     */
  10266     uint32_t GetGpuDefragmentationMemoryTypeBits();
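
    /*
    For context, a sketch of the public defragmentation loop these bits feed into
    (`allocator` is assumed to exist; data for the moves would normally be copied
    with GPU commands between passes):

    \code
    VmaDefragmentationInfo defragInfo = {};
    VmaDefragmentationContext defragCtx = VK_NULL_HANDLE;
    vmaBeginDefragmentation(allocator, &defragInfo, &defragCtx);
    for(;;)
    {
        VmaDefragmentationPassMoveInfo pass = {};
        if(vmaBeginDefragmentationPass(allocator, defragCtx, &pass) == VK_SUCCESS)
            break; // No more moves to do.
        // Copy data for pass.pMoves[0..pass.moveCount) here...
        if(vmaEndDefragmentationPass(allocator, defragCtx, &pass) == VK_SUCCESS)
            break;
    }
    vmaEndDefragmentation(allocator, defragCtx, VMA_NULL);
    \endcode
    */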
  10267 
  10268 #if VMA_EXTERNAL_MEMORY
  10269     VkExternalMemoryHandleTypeFlagsKHR GetExternalMemoryHandleTypeFlags(uint32_t memTypeIndex) const
  10270     {
  10271         return m_TypeExternalMemoryHandleTypes[memTypeIndex];
  10272     }
  10273 #endif // #if VMA_EXTERNAL_MEMORY
  10274 
  10275 private:
  10276     VkDeviceSize m_PreferredLargeHeapBlockSize;
  10277 
  10278     VkPhysicalDevice m_PhysicalDevice;
  10279     VMA_ATOMIC_UINT32 m_CurrentFrameIndex;
  10280     VMA_ATOMIC_UINT32 m_GpuDefragmentationMemoryTypeBits; // UINT32_MAX means uninitialized.
  10281 #if VMA_EXTERNAL_MEMORY
  10282     VkExternalMemoryHandleTypeFlagsKHR m_TypeExternalMemoryHandleTypes[VK_MAX_MEMORY_TYPES];
  10283 #endif // #if VMA_EXTERNAL_MEMORY
  10284 
  10285     VMA_RW_MUTEX m_PoolsMutex;
  10286     typedef VmaIntrusiveLinkedList<VmaPoolListItemTraits> PoolList;
  10287     // Protected by m_PoolsMutex.
  10288     PoolList m_Pools;
  10289     uint32_t m_NextPoolId;
  10290 
  10291     VmaVulkanFunctions m_VulkanFunctions;
  10292 
  10293     // Global bit mask AND-ed with any memoryTypeBits to disallow certain memory types.
  10294     uint32_t m_GlobalMemoryTypeBits;
  10295 
  10296     void ImportVulkanFunctions(const VmaVulkanFunctions* pVulkanFunctions);
  10297 
  10298 #if VMA_STATIC_VULKAN_FUNCTIONS == 1
  10299     void ImportVulkanFunctions_Static();
  10300 #endif
  10301 
  10302     void ImportVulkanFunctions_Custom(const VmaVulkanFunctions* pVulkanFunctions);
  10303 
  10304 #if VMA_DYNAMIC_VULKAN_FUNCTIONS == 1
  10305     void ImportVulkanFunctions_Dynamic();
  10306 #endif
  10307 
  10308     void ValidateVulkanFunctions();
  10309 
  10310     VkDeviceSize CalcPreferredBlockSize(uint32_t memTypeIndex);
  10311 
  10312     VkResult AllocateMemoryOfType(
  10313         VmaPool pool,
  10314         VkDeviceSize size,
  10315         VkDeviceSize alignment,
  10316         bool dedicatedPreferred,
  10317         VkBuffer dedicatedBuffer,
  10318         VkImage dedicatedImage,
  10319         VmaBufferImageUsage dedicatedBufferImageUsage,
  10320         const VmaAllocationCreateInfo& createInfo,
  10321         uint32_t memTypeIndex,
  10322         VmaSuballocationType suballocType,
  10323         VmaDedicatedAllocationList& dedicatedAllocations,
  10324         VmaBlockVector& blockVector,
  10325         size_t allocationCount,
  10326         VmaAllocation* pAllocations);
  10327 
  10328     // Helper function only to be used inside AllocateDedicatedMemory.
  10329     VkResult AllocateDedicatedMemoryPage(
  10330         VmaPool pool,
  10331         VkDeviceSize size,
  10332         VmaSuballocationType suballocType,
  10333         uint32_t memTypeIndex,
  10334         const VkMemoryAllocateInfo& allocInfo,
  10335         bool map,
  10336         bool isUserDataString,
  10337         bool isMappingAllowed,
  10338         void* pUserData,
  10339         VmaAllocation* pAllocation);
  10340 
  10341     // Allocates and registers new VkDeviceMemory specifically for dedicated allocations.
  10342     VkResult AllocateDedicatedMemory(
  10343         VmaPool pool,
  10344         VkDeviceSize size,
  10345         VmaSuballocationType suballocType,
  10346         VmaDedicatedAllocationList& dedicatedAllocations,
  10347         uint32_t memTypeIndex,
  10348         bool map,
  10349         bool isUserDataString,
  10350         bool isMappingAllowed,
  10351         bool canAliasMemory,
  10352         void* pUserData,
  10353         float priority,
  10354         VkBuffer dedicatedBuffer,
  10355         VkImage dedicatedImage,
  10356         VmaBufferImageUsage dedicatedBufferImageUsage,
  10357         size_t allocationCount,
  10358         VmaAllocation* pAllocations,
  10359         const void* pNextChain = VMA_NULL);
  10360 
  10361     void FreeDedicatedMemory(const VmaAllocation allocation);
  10362 
  10363     VkResult CalcMemTypeParams(
  10364         VmaAllocationCreateInfo& outCreateInfo,
  10365         uint32_t memTypeIndex,
  10366         VkDeviceSize size,
  10367         size_t allocationCount);
  10368     VkResult CalcAllocationParams(
  10369         VmaAllocationCreateInfo& outCreateInfo,
  10370         bool dedicatedRequired,
  10371         bool dedicatedPreferred);
  10372 
  10373     /*
  10374     Calculates and returns a bit mask of the memory types that can support
  10375     defragmentation on the GPU, i.e. those that allow creation of the buffer
  10376     required for copy operations.
  10377     */
  10377     uint32_t CalculateGpuDefragmentationMemoryTypeBits() const;
  10378     uint32_t CalculateGlobalMemoryTypeBits() const;
  10379 
  10380     bool GetFlushOrInvalidateRange(
  10381         VmaAllocation allocation,
  10382         VkDeviceSize offset, VkDeviceSize size,
  10383         VkMappedMemoryRange& outRange) const;
  10384 
  10385 #if VMA_MEMORY_BUDGET
  10386     void UpdateVulkanBudget();
  10387 #endif // #if VMA_MEMORY_BUDGET
  10388 };
  10389 
  10390 
  10391 #ifndef _VMA_MEMORY_FUNCTIONS
  10392 static void* VmaMalloc(VmaAllocator hAllocator, size_t size, size_t alignment)
  10393 {
  10394     return VmaMalloc(&hAllocator->m_AllocationCallbacks, size, alignment);
  10395 }
  10396 
  10397 static void VmaFree(VmaAllocator hAllocator, void* ptr)
  10398 {
  10399     VmaFree(&hAllocator->m_AllocationCallbacks, ptr);
  10400 }
  10401 
  10402 template<typename T>
  10403 static T* VmaAllocate(VmaAllocator hAllocator)
  10404 {
  10405     return (T*)VmaMalloc(hAllocator, sizeof(T), VMA_ALIGN_OF(T));
  10406 }
  10407 
  10408 template<typename T>
  10409 static T* VmaAllocateArray(VmaAllocator hAllocator, size_t count)
  10410 {
  10411     return (T*)VmaMalloc(hAllocator, sizeof(T) * count, VMA_ALIGN_OF(T));
  10412 }
  10413 
  10414 template<typename T>
  10415 static void vma_delete(VmaAllocator hAllocator, T* ptr)
  10416 {
  10417     if(ptr != VMA_NULL)
  10418     {
  10419         ptr->~T();
  10420         VmaFree(hAllocator, ptr);
  10421     }
  10422 }
  10423 
  10424 template<typename T>
  10425 static void vma_delete_array(VmaAllocator hAllocator, T* ptr, size_t count)
  10426 {
  10427     if(ptr != VMA_NULL)
  10428     {
  10429         for(size_t i = count; i--; )
  10430             ptr[i].~T();
  10431         VmaFree(hAllocator, ptr);
  10432     }
  10433 }
  10434 #endif // _VMA_MEMORY_FUNCTIONS
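
/*
A hypothetical internal usage sketch of the helpers above (the element type is
illustrative). Note that VmaAllocateArray() only allocates raw storage, while
vma_delete_array() destroys the elements in reverse order before freeing:

\code
float* arr = VmaAllocateArray<float>(hAllocator, 16);
// ...fill and use arr...
vma_delete_array(hAllocator, arr, 16);
\endcode
*/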
  10435 
  10436 #ifndef _VMA_DEVICE_MEMORY_BLOCK_FUNCTIONS
  10437 VmaDeviceMemoryBlock::VmaDeviceMemoryBlock(VmaAllocator hAllocator)
  10438     : m_pMetadata(VMA_NULL),
  10439     m_MemoryTypeIndex(UINT32_MAX),
  10440     m_Id(0),
  10441     m_hMemory(VK_NULL_HANDLE),
  10442     m_MapCount(0),
  10443     m_pMappedData(VMA_NULL) {}
  10444 
  10445 VmaDeviceMemoryBlock::~VmaDeviceMemoryBlock()
  10446 {
  10447     VMA_ASSERT_LEAK(m_MapCount == 0 && "VkDeviceMemory block is being destroyed while it is still mapped.");
  10448     VMA_ASSERT_LEAK(m_hMemory == VK_NULL_HANDLE);
  10449 }
  10450 
  10451 void VmaDeviceMemoryBlock::Init(
  10452     VmaAllocator hAllocator,
  10453     VmaPool hParentPool,
  10454     uint32_t newMemoryTypeIndex,
  10455     VkDeviceMemory newMemory,
  10456     VkDeviceSize newSize,
  10457     uint32_t id,
  10458     uint32_t algorithm,
  10459     VkDeviceSize bufferImageGranularity)
  10460 {
  10461     VMA_ASSERT(m_hMemory == VK_NULL_HANDLE);
  10462 
  10463     m_hParentPool = hParentPool;
  10464     m_MemoryTypeIndex = newMemoryTypeIndex;
  10465     m_Id = id;
  10466     m_hMemory = newMemory;
  10467 
  10468     switch (algorithm)
  10469     {
  10470     case 0:
  10471         m_pMetadata = vma_new(hAllocator, VmaBlockMetadata_TLSF)(hAllocator->GetAllocationCallbacks(),
  10472             bufferImageGranularity, false); // isVirtual
  10473         break;
  10474     case VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT:
  10475         m_pMetadata = vma_new(hAllocator, VmaBlockMetadata_Linear)(hAllocator->GetAllocationCallbacks(),
  10476             bufferImageGranularity, false); // isVirtual
  10477         break;
  10478     default:
  10479         VMA_ASSERT(0);
  10480         m_pMetadata = vma_new(hAllocator, VmaBlockMetadata_TLSF)(hAllocator->GetAllocationCallbacks(),
  10481             bufferImageGranularity, false); // isVirtual
  10482     }
  10483     m_pMetadata->Init(newSize);
  10484 }
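
/*
The `algorithm` value originates from VmaPoolCreateInfo::flags. A sketch of
creating a custom pool whose blocks will use the linear metadata selected above
(the memory type index and block size are illustrative):

\code
VmaPoolCreateInfo poolCreateInfo = {};
poolCreateInfo.memoryTypeIndex = myMemTypeIndex;
poolCreateInfo.flags = VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT;
poolCreateInfo.blockSize = 64ull * 1024 * 1024;
VmaPool pool = VK_NULL_HANDLE;
VkResult res = vmaCreatePool(allocator, &poolCreateInfo, &pool);
\endcode
*/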
  10485 
  10486 void VmaDeviceMemoryBlock::Destroy(VmaAllocator allocator)
  10487 {
  10488     // Define the macro VMA_DEBUG_LOG_FORMAT, or the more specialized
  10489     // VMA_LEAK_LOG_FORMAT, to receive the list of unfreed allocations.
  10490     if (!m_pMetadata->IsEmpty())
  10491         m_pMetadata->DebugLogAllAllocations();
  10492     // This is the most important assert in the entire library.
  10493     // Hitting it means you have some memory leak - unreleased VmaAllocation objects.
  10494     VMA_ASSERT_LEAK(m_pMetadata->IsEmpty() && "Some allocations were not freed before destruction of this memory block!");
  10495 
  10496     VMA_ASSERT_LEAK(m_hMemory != VK_NULL_HANDLE);
  10497     allocator->FreeVulkanMemory(m_MemoryTypeIndex, m_pMetadata->GetSize(), m_hMemory);
  10498     m_hMemory = VK_NULL_HANDLE;
  10499 
  10500     vma_delete(allocator, m_pMetadata);
  10501     m_pMetadata = VMA_NULL;
  10502 }
  10503 
  10504 void VmaDeviceMemoryBlock::PostAlloc(VmaAllocator hAllocator)
  10505 {
  10506     VmaMutexLock lock(m_MapAndBindMutex, hAllocator->m_UseMutex);
  10507     m_MappingHysteresis.PostAlloc();
  10508 }
  10509 
  10510 void VmaDeviceMemoryBlock::PostFree(VmaAllocator hAllocator)
  10511 {
  10512     VmaMutexLock lock(m_MapAndBindMutex, hAllocator->m_UseMutex);
  10513     if(m_MappingHysteresis.PostFree())
  10514     {
  10515         VMA_ASSERT(m_MappingHysteresis.GetExtraMapping() == 0);
  10516         if (m_MapCount == 0)
  10517         {
  10518             m_pMappedData = VMA_NULL;
  10519             (*hAllocator->GetVulkanFunctions().vkUnmapMemory)(hAllocator->m_hDevice, m_hMemory);
  10520         }
  10521     }
  10522 }
  10523 
  10524 bool VmaDeviceMemoryBlock::Validate() const
  10525 {
  10526     VMA_VALIDATE((m_hMemory != VK_NULL_HANDLE) &&
  10527         (m_pMetadata->GetSize() != 0));
  10528 
  10529     return m_pMetadata->Validate();
  10530 }
  10531 
  10532 VkResult VmaDeviceMemoryBlock::CheckCorruption(VmaAllocator hAllocator)
  10533 {
  10534     void* pData = VMA_NULL;
  10535     VkResult res = Map(hAllocator, 1, &pData);
  10536     if (res != VK_SUCCESS)
  10537     {
  10538         return res;
  10539     }
  10540 
  10541     res = m_pMetadata->CheckCorruption(pData);
  10542 
  10543     Unmap(hAllocator, 1);
  10544 
  10545     return res;
  10546 }
  10547 
  10548 VkResult VmaDeviceMemoryBlock::Map(VmaAllocator hAllocator, uint32_t count, void** ppData)
  10549 {
  10550     if (count == 0)
  10551     {
  10552         return VK_SUCCESS;
  10553     }
  10554 
  10555     VmaMutexLock lock(m_MapAndBindMutex, hAllocator->m_UseMutex);
  10556     const uint32_t oldTotalMapCount = m_MapCount + m_MappingHysteresis.GetExtraMapping();
  10557     if (oldTotalMapCount != 0)
  10558     {
  10559         VMA_ASSERT(m_pMappedData != VMA_NULL);
  10560         m_MappingHysteresis.PostMap();
  10561         m_MapCount += count;
  10562         if (ppData != VMA_NULL)
  10563         {
  10564             *ppData = m_pMappedData;
  10565         }
  10566         return VK_SUCCESS;
  10567     }
  10568     else
  10569     {
  10570         VkResult result = (*hAllocator->GetVulkanFunctions().vkMapMemory)(
  10571             hAllocator->m_hDevice,
  10572             m_hMemory,
  10573             0, // offset
  10574             VK_WHOLE_SIZE,
  10575             0, // flags
  10576             &m_pMappedData);
  10577         if (result == VK_SUCCESS)
  10578         {
  10579             VMA_ASSERT(m_pMappedData != VMA_NULL);
  10580             m_MappingHysteresis.PostMap();
  10581             m_MapCount = count;
  10582             if (ppData != VMA_NULL)
  10583             {
  10584                 *ppData = m_pMappedData;
  10585             }
  10586         }
  10587         return result;
  10588     }
  10589 }
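
/*
Mapping is reference-counted per block: only the 0 -> nonzero transition calls
vkMapMemory, and the hysteresis may keep a block mapped across short unmap/map
sequences. At the public API level this is what makes nested mapping of the
same allocation legal (sketch; `allocator` and `alloc` assumed to exist):

\code
void* p1 = VMA_NULL;
void* p2 = VMA_NULL;
vmaMapMemory(allocator, alloc, &p1);
vmaMapMemory(allocator, alloc, &p2); // Same pointer, internal count = 2.
vmaUnmapMemory(allocator, alloc);
vmaUnmapMemory(allocator, alloc);    // Count back to 0.
\endcode
*/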
  10590 
  10591 void VmaDeviceMemoryBlock::Unmap(VmaAllocator hAllocator, uint32_t count)
  10592 {
  10593     if (count == 0)
  10594     {
  10595         return;
  10596     }
  10597 
  10598     VmaMutexLock lock(m_MapAndBindMutex, hAllocator->m_UseMutex);
  10599     if (m_MapCount >= count)
  10600     {
  10601         m_MapCount -= count;
  10602         const uint32_t totalMapCount = m_MapCount + m_MappingHysteresis.GetExtraMapping();
  10603         if (totalMapCount == 0)
  10604         {
  10605             m_pMappedData = VMA_NULL;
  10606             (*hAllocator->GetVulkanFunctions().vkUnmapMemory)(hAllocator->m_hDevice, m_hMemory);
  10607         }
  10608         m_MappingHysteresis.PostUnmap();
  10609     }
  10610     else
  10611     {
  10612         VMA_ASSERT(0 && "VkDeviceMemory block is being unmapped while it was not previously mapped.");
  10613     }
  10614 }
  10615 
  10616 VkResult VmaDeviceMemoryBlock::WriteMagicValueAfterAllocation(VmaAllocator hAllocator, VkDeviceSize allocOffset, VkDeviceSize allocSize)
  10617 {
  10618     VMA_ASSERT(VMA_DEBUG_MARGIN > 0 && VMA_DEBUG_MARGIN % 4 == 0 && VMA_DEBUG_DETECT_CORRUPTION);
  10619 
  10620     void* pData;
  10621     VkResult res = Map(hAllocator, 1, &pData);
  10622     if (res != VK_SUCCESS)
  10623     {
  10624         return res;
  10625     }
  10626 
  10627     VmaWriteMagicValue(pData, allocOffset + allocSize);
  10628 
  10629     Unmap(hAllocator, 1);
  10630     return VK_SUCCESS;
  10631 }
  10632 
  10633 VkResult VmaDeviceMemoryBlock::ValidateMagicValueAfterAllocation(VmaAllocator hAllocator, VkDeviceSize allocOffset, VkDeviceSize allocSize)
  10634 {
  10635     VMA_ASSERT(VMA_DEBUG_MARGIN > 0 && VMA_DEBUG_MARGIN % 4 == 0 && VMA_DEBUG_DETECT_CORRUPTION);
  10636 
  10637     void* pData;
  10638     VkResult res = Map(hAllocator, 1, &pData);
  10639     if (res != VK_SUCCESS)
  10640     {
  10641         return res;
  10642     }
  10643 
  10644     if (!VmaValidateMagicValue(pData, allocOffset + allocSize))
  10645     {
  10646         VMA_ASSERT(0 && "MEMORY CORRUPTION DETECTED AFTER FREED ALLOCATION!");
  10647     }
  10648 
  10649     Unmap(hAllocator, 1);
  10650     return VK_SUCCESS;
  10651 }
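
/*
Both magic-value functions are active only in builds configured for corruption
detection. A build-configuration sketch (the margin value is illustrative):

\code
#define VMA_DEBUG_MARGIN 16
#define VMA_DEBUG_DETECT_CORRUPTION 1
#include "vk_mem_alloc.h"
// Later, to scan all memory types that are HOST_VISIBLE and HOST_COHERENT:
VkResult res = vmaCheckCorruption(allocator, UINT32_MAX);
\endcode
*/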
  10652 
  10653 VkResult VmaDeviceMemoryBlock::BindBufferMemory(
  10654     const VmaAllocator hAllocator,
  10655     const VmaAllocation hAllocation,
  10656     VkDeviceSize allocationLocalOffset,
  10657     VkBuffer hBuffer,
  10658     const void* pNext)
  10659 {
  10660     VMA_ASSERT(hAllocation->GetType() == VmaAllocation_T::ALLOCATION_TYPE_BLOCK &&
  10661         hAllocation->GetBlock() == this);
  10662     VMA_ASSERT(allocationLocalOffset < hAllocation->GetSize() &&
  10663         "Invalid allocationLocalOffset. Did you forget that this offset is relative to the beginning of the allocation, not the whole memory block?");
  10664     const VkDeviceSize memoryOffset = hAllocation->GetOffset() + allocationLocalOffset;
  10665     // This lock is important so that we don't call vkBind... and/or vkMap... simultaneously on the same VkDeviceMemory from multiple threads.
  10666     VmaMutexLock lock(m_MapAndBindMutex, hAllocator->m_UseMutex);
  10667     return hAllocator->BindVulkanBuffer(m_hMemory, memoryOffset, hBuffer, pNext);
  10668 }
  10669 
  10670 VkResult VmaDeviceMemoryBlock::BindImageMemory(
  10671     const VmaAllocator hAllocator,
  10672     const VmaAllocation hAllocation,
  10673     VkDeviceSize allocationLocalOffset,
  10674     VkImage hImage,
  10675     const void* pNext)
  10676 {
  10677     VMA_ASSERT(hAllocation->GetType() == VmaAllocation_T::ALLOCATION_TYPE_BLOCK &&
  10678         hAllocation->GetBlock() == this);
  10679     VMA_ASSERT(allocationLocalOffset < hAllocation->GetSize() &&
  10680         "Invalid allocationLocalOffset. Did you forget that this offset is relative to the beginning of the allocation, not the whole memory block?");
  10681     const VkDeviceSize memoryOffset = hAllocation->GetOffset() + allocationLocalOffset;
  10682     // This lock is important so that we don't call vkBind... and/or vkMap... simultaneously on the same VkDeviceMemory from multiple threads.
  10683     VmaMutexLock lock(m_MapAndBindMutex, hAllocator->m_UseMutex);
  10684     return hAllocator->BindVulkanImage(m_hMemory, memoryOffset, hImage, pNext);
  10685 }
  10686 #endif // _VMA_DEVICE_MEMORY_BLOCK_FUNCTIONS
  10687 
  10688 #ifndef _VMA_ALLOCATION_T_FUNCTIONS
  10689 VmaAllocation_T::VmaAllocation_T(bool mappingAllowed)
  10690     : m_Alignment{ 1 },
  10691     m_Size{ 0 },
  10692     m_pUserData{ VMA_NULL },
  10693     m_pName{ VMA_NULL },
  10694     m_MemoryTypeIndex{ 0 },
  10695     m_Type{ (uint8_t)ALLOCATION_TYPE_NONE },
  10696     m_SuballocationType{ (uint8_t)VMA_SUBALLOCATION_TYPE_UNKNOWN },
  10697     m_MapCount{ 0 },
  10698     m_Flags{ 0 }
  10699 {
  10700     if(mappingAllowed)
  10701         m_Flags |= (uint8_t)FLAG_MAPPING_ALLOWED;
  10702 }
  10703 
  10704 VmaAllocation_T::~VmaAllocation_T()
  10705 {
  10706     VMA_ASSERT_LEAK(m_MapCount == 0 && "Allocation was not unmapped before destruction.");
  10707 
  10708     // Check if owned string was freed.
  10709     VMA_ASSERT(m_pName == VMA_NULL);
  10710 }
  10711 
  10712 void VmaAllocation_T::InitBlockAllocation(
  10713     VmaDeviceMemoryBlock* block,
  10714     VmaAllocHandle allocHandle,
  10715     VkDeviceSize alignment,
  10716     VkDeviceSize size,
  10717     uint32_t memoryTypeIndex,
  10718     VmaSuballocationType suballocationType,
  10719     bool mapped)
  10720 {
  10721     VMA_ASSERT(m_Type == ALLOCATION_TYPE_NONE);
  10722     VMA_ASSERT(block != VMA_NULL);
  10723     m_Type = (uint8_t)ALLOCATION_TYPE_BLOCK;
  10724     m_Alignment = alignment;
  10725     m_Size = size;
  10726     m_MemoryTypeIndex = memoryTypeIndex;
  10727     if(mapped)
  10728     {
  10729         VMA_ASSERT(IsMappingAllowed() && "Mapping is not allowed on this allocation! Please use one of the new VMA_ALLOCATION_CREATE_HOST_ACCESS_* flags when creating it.");
  10730         m_Flags |= (uint8_t)FLAG_PERSISTENT_MAP;
  10731     }
  10732     m_SuballocationType = (uint8_t)suballocationType;
  10733     m_BlockAllocation.m_Block = block;
  10734     m_BlockAllocation.m_AllocHandle = allocHandle;
  10735 }
  10736 
  10737 void VmaAllocation_T::InitDedicatedAllocation(
  10738     VmaPool hParentPool,
  10739     uint32_t memoryTypeIndex,
  10740     VkDeviceMemory hMemory,
  10741     VmaSuballocationType suballocationType,
  10742     void* pMappedData,
  10743     VkDeviceSize size)
  10744 {
  10745     VMA_ASSERT(m_Type == ALLOCATION_TYPE_NONE);
  10746     VMA_ASSERT(hMemory != VK_NULL_HANDLE);
  10747     m_Type = (uint8_t)ALLOCATION_TYPE_DEDICATED;
  10748     m_Alignment = 0;
  10749     m_Size = size;
  10750     m_MemoryTypeIndex = memoryTypeIndex;
  10751     m_SuballocationType = (uint8_t)suballocationType;
  10752     if(pMappedData != VMA_NULL)
  10753     {
  10754         VMA_ASSERT(IsMappingAllowed() && "Mapping is not allowed on this allocation! Please use one of the new VMA_ALLOCATION_CREATE_HOST_ACCESS_* flags when creating it.");
  10755         m_Flags |= (uint8_t)FLAG_PERSISTENT_MAP;
  10756     }
  10757     m_DedicatedAllocation.m_hParentPool = hParentPool;
  10758     m_DedicatedAllocation.m_hMemory = hMemory;
  10759     m_DedicatedAllocation.m_pMappedData = pMappedData;
  10760     m_DedicatedAllocation.m_Prev = VMA_NULL;
  10761     m_DedicatedAllocation.m_Next = VMA_NULL;
  10762 }
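
/*
A dedicated allocation like the one initialized above can be requested
explicitly at the public level (sketch; `bufCreateInfo` is an ordinary
VkBufferCreateInfo assumed to exist):

\code
VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
allocCreateInfo.flags = VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT;
VkBuffer buf = VK_NULL_HANDLE;
VmaAllocation alloc = VK_NULL_HANDLE;
vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, VMA_NULL);
\endcode
*/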
  10763 
  10764 void VmaAllocation_T::SetName(VmaAllocator hAllocator, const char* pName)
  10765 {
  10766     VMA_ASSERT(pName == VMA_NULL || pName != m_pName);
  10767 
  10768     FreeName(hAllocator);
  10769 
  10770     if (pName != VMA_NULL)
  10771         m_pName = VmaCreateStringCopy(hAllocator->GetAllocationCallbacks(), pName);
  10772 }
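
// SetName() backs the public vmaSetAllocationName(); the string is copied with
// the allocator's host allocation callbacks, so the caller's buffer need not
// outlive the call. Sketch: vmaSetAllocationName(allocator, alloc, "Terrain VB");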
  10773 
  10774 uint8_t VmaAllocation_T::SwapBlockAllocation(VmaAllocator hAllocator, VmaAllocation allocation)
  10775 {
  10776     VMA_ASSERT(allocation != VMA_NULL);
  10777     VMA_ASSERT(m_Type == ALLOCATION_TYPE_BLOCK);
  10778     VMA_ASSERT(allocation->m_Type == ALLOCATION_TYPE_BLOCK);
  10779 
  10780     if (m_MapCount != 0)
  10781         m_BlockAllocation.m_Block->Unmap(hAllocator, m_MapCount);
  10782 
  10783     m_BlockAllocation.m_Block->m_pMetadata->SetAllocationUserData(m_BlockAllocation.m_AllocHandle, allocation);
  10784     std::swap(m_BlockAllocation, allocation->m_BlockAllocation);
  10785     m_BlockAllocation.m_Block->m_pMetadata->SetAllocationUserData(m_BlockAllocation.m_AllocHandle, this);
  10786 
  10787 #if VMA_STATS_STRING_ENABLED
  10788     std::swap(m_BufferImageUsage, allocation->m_BufferImageUsage);
  10789 #endif
  10790     return m_MapCount;
  10791 }
  10792 
  10793 VmaAllocHandle VmaAllocation_T::GetAllocHandle() const
  10794 {
  10795     switch (m_Type)
  10796     {
  10797     case ALLOCATION_TYPE_BLOCK:
  10798         return m_BlockAllocation.m_AllocHandle;
  10799     case ALLOCATION_TYPE_DEDICATED:
  10800         return VK_NULL_HANDLE;
  10801     default:
  10802         VMA_ASSERT(0);
  10803         return VK_NULL_HANDLE;
  10804     }
  10805 }
  10806 
  10807 VkDeviceSize VmaAllocation_T::GetOffset() const
  10808 {
  10809     switch (m_Type)
  10810     {
  10811     case ALLOCATION_TYPE_BLOCK:
  10812         return m_BlockAllocation.m_Block->m_pMetadata->GetAllocationOffset(m_BlockAllocation.m_AllocHandle);
  10813     case ALLOCATION_TYPE_DEDICATED:
  10814         return 0;
  10815     default:
  10816         VMA_ASSERT(0);
  10817         return 0;
  10818     }
  10819 }
  10820 
  10821 VmaPool VmaAllocation_T::GetParentPool() const
  10822 {
  10823     switch (m_Type)
  10824     {
  10825     case ALLOCATION_TYPE_BLOCK:
  10826         return m_BlockAllocation.m_Block->GetParentPool();
  10827     case ALLOCATION_TYPE_DEDICATED:
  10828         return m_DedicatedAllocation.m_hParentPool;
  10829     default:
  10830         VMA_ASSERT(0);
  10831         return VK_NULL_HANDLE;
  10832     }
  10833 }
  10834 
  10835 VkDeviceMemory VmaAllocation_T::GetMemory() const
  10836 {
  10837     switch (m_Type)
  10838     {
  10839     case ALLOCATION_TYPE_BLOCK:
  10840         return m_BlockAllocation.m_Block->GetDeviceMemory();
  10841     case ALLOCATION_TYPE_DEDICATED:
  10842         return m_DedicatedAllocation.m_hMemory;
  10843     default:
  10844         VMA_ASSERT(0);
  10845         return VK_NULL_HANDLE;
  10846     }
  10847 }
  10848 
  10849 void* VmaAllocation_T::GetMappedData() const
  10850 {
  10851     switch (m_Type)
  10852     {
  10853     case ALLOCATION_TYPE_BLOCK:
  10854         if (m_MapCount != 0 || IsPersistentMap())
  10855         {
  10856             void* pBlockData = m_BlockAllocation.m_Block->GetMappedData();
  10857             VMA_ASSERT(pBlockData != VMA_NULL);
  10858             return (char*)pBlockData + GetOffset();
  10859         }
  10860         else
  10861         {
  10862             return VMA_NULL;
  10863         }
  10864         break;
  10865     case ALLOCATION_TYPE_DEDICATED:
  10866         VMA_ASSERT((m_DedicatedAllocation.m_pMappedData != VMA_NULL) == (m_MapCount != 0 || IsPersistentMap()));
  10867         return m_DedicatedAllocation.m_pMappedData;
  10868     default:
  10869         VMA_ASSERT(0);
  10870         return VMA_NULL;
  10871     }
  10872 }
  10873 
  10874 void VmaAllocation_T::BlockAllocMap()
  10875 {
  10876     VMA_ASSERT(GetType() == ALLOCATION_TYPE_BLOCK);
  10877     VMA_ASSERT(IsMappingAllowed() && "Mapping is not allowed on this allocation! Please use one of the new VMA_ALLOCATION_CREATE_HOST_ACCESS_* flags when creating it.");
  10878 
  10879     if (m_MapCount < 0xFF)
  10880     {
  10881         ++m_MapCount;
  10882     }
  10883     else
  10884     {
  10885         VMA_ASSERT(0 && "Allocation mapped too many times simultaneously.");
  10886     }
  10887 }
  10888 
  10889 void VmaAllocation_T::BlockAllocUnmap()
  10890 {
  10891     VMA_ASSERT(GetType() == ALLOCATION_TYPE_BLOCK);
  10892 
  10893     if (m_MapCount > 0)
  10894     {
  10895         --m_MapCount;
  10896     }
  10897     else
  10898     {
  10899         VMA_ASSERT(0 && "Unmapping allocation not previously mapped.");
  10900     }
  10901 }
  10902 
  10903 VkResult VmaAllocation_T::DedicatedAllocMap(VmaAllocator hAllocator, void** ppData)
  10904 {
  10905     VMA_ASSERT(GetType() == ALLOCATION_TYPE_DEDICATED);
  10906     VMA_ASSERT(IsMappingAllowed() && "Mapping is not allowed on this allocation! Please use one of the new VMA_ALLOCATION_CREATE_HOST_ACCESS_* flags when creating it.");
  10907 
  10908     if (m_MapCount != 0 || IsPersistentMap())
  10909     {
  10910         if (m_MapCount < 0xFF)
  10911         {
  10912             VMA_ASSERT(m_DedicatedAllocation.m_pMappedData != VMA_NULL);
  10913             *ppData = m_DedicatedAllocation.m_pMappedData;
  10914             ++m_MapCount;
  10915             return VK_SUCCESS;
  10916         }
  10917         else
  10918         {
  10919             VMA_ASSERT(0 && "Dedicated allocation mapped too many times simultaneously.");
  10920             return VK_ERROR_MEMORY_MAP_FAILED;
  10921         }
  10922     }
  10923     else
  10924     {
  10925         VkResult result = (*hAllocator->GetVulkanFunctions().vkMapMemory)(
  10926             hAllocator->m_hDevice,
  10927             m_DedicatedAllocation.m_hMemory,
  10928             0, // offset
  10929             VK_WHOLE_SIZE,
  10930             0, // flags
  10931             ppData);
  10932         if (result == VK_SUCCESS)
  10933         {
  10934             m_DedicatedAllocation.m_pMappedData = *ppData;
  10935             m_MapCount = 1;
  10936         }
  10937         return result;
  10938     }
  10939 }
  10940 
  10941 void VmaAllocation_T::DedicatedAllocUnmap(VmaAllocator hAllocator)
  10942 {
  10943     VMA_ASSERT(GetType() == ALLOCATION_TYPE_DEDICATED);
  10944 
  10945     if (m_MapCount > 0)
  10946     {
  10947         --m_MapCount;
  10948         if (m_MapCount == 0 && !IsPersistentMap())
  10949         {
  10950             m_DedicatedAllocation.m_pMappedData = VMA_NULL;
  10951             (*hAllocator->GetVulkanFunctions().vkUnmapMemory)(
  10952                 hAllocator->m_hDevice,
  10953                 m_DedicatedAllocation.m_hMemory);
  10954         }
  10955     }
  10956     else
  10957     {
  10958         VMA_ASSERT(0 && "Unmapping dedicated allocation not previously mapped.");
  10959     }
  10960 }
  10961 
  10962 #if VMA_STATS_STRING_ENABLED
  10963 void VmaAllocation_T::PrintParameters(class VmaJsonWriter& json) const
  10964 {
  10965     json.WriteString("Type");
  10966     json.WriteString(VMA_SUBALLOCATION_TYPE_NAMES[m_SuballocationType]);
  10967 
  10968     json.WriteString("Size");
  10969     json.WriteNumber(m_Size);
  10970     json.WriteString("Usage");
  10971     json.WriteNumber(m_BufferImageUsage.Value); // It may be uint32_t or uint64_t.
  10972 
  10973     if (m_pUserData != VMA_NULL)
  10974     {
  10975         json.WriteString("CustomData");
  10976         json.BeginString();
  10977         json.ContinueString_Pointer(m_pUserData);
  10978         json.EndString();
  10979     }
  10980     if (m_pName != VMA_NULL)
  10981     {
  10982         json.WriteString("Name");
  10983         json.WriteString(m_pName);
  10984     }
  10985 }
  10986 #endif // VMA_STATS_STRING_ENABLED
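
/*
These parameters appear in the JSON dump produced by the public statistics API
(sketch):

\code
char* statsJson = VMA_NULL;
vmaBuildStatsString(allocator, &statsJson, VK_TRUE); // VK_TRUE = include detailed map.
// ...log or save statsJson...
vmaFreeStatsString(allocator, statsJson);
\endcode
*/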
  10987 
  10988 void VmaAllocation_T::FreeName(VmaAllocator hAllocator)
  10989 {
  10990     if(m_pName)
  10991     {
  10992         VmaFreeString(hAllocator->GetAllocationCallbacks(), m_pName);
  10993         m_pName = VMA_NULL;
  10994     }
  10995 }
  10996 #endif // _VMA_ALLOCATION_T_FUNCTIONS
  10997 
  10998 #ifndef _VMA_BLOCK_VECTOR_FUNCTIONS
  10999 VmaBlockVector::VmaBlockVector(
  11000     VmaAllocator hAllocator,
  11001     VmaPool hParentPool,
  11002     uint32_t memoryTypeIndex,
  11003     VkDeviceSize preferredBlockSize,
  11004     size_t minBlockCount,
  11005     size_t maxBlockCount,
  11006     VkDeviceSize bufferImageGranularity,
  11007     bool explicitBlockSize,
  11008     uint32_t algorithm,
  11009     float priority,
  11010     VkDeviceSize minAllocationAlignment,
  11011     void* pMemoryAllocateNext)
  11012     : m_hAllocator(hAllocator),
  11013     m_hParentPool(hParentPool),
  11014     m_MemoryTypeIndex(memoryTypeIndex),
  11015     m_PreferredBlockSize(preferredBlockSize),
  11016     m_MinBlockCount(minBlockCount),
  11017     m_MaxBlockCount(maxBlockCount),
  11018     m_BufferImageGranularity(bufferImageGranularity),
  11019     m_ExplicitBlockSize(explicitBlockSize),
  11020     m_Algorithm(algorithm),
  11021     m_Priority(priority),
  11022     m_MinAllocationAlignment(minAllocationAlignment),
  11023     m_pMemoryAllocateNext(pMemoryAllocateNext),
  11024     m_Blocks(VmaStlAllocator<VmaDeviceMemoryBlock*>(hAllocator->GetAllocationCallbacks())),
  11025     m_NextBlockId(0) {}
  11026 
  11027 VmaBlockVector::~VmaBlockVector()
  11028 {
  11029     for (size_t i = m_Blocks.size(); i--; )
  11030     {
  11031         m_Blocks[i]->Destroy(m_hAllocator);
  11032         vma_delete(m_hAllocator, m_Blocks[i]);
  11033     }
  11034 }
  11035 
  11036 VkResult VmaBlockVector::CreateMinBlocks()
  11037 {
  11038     for (size_t i = 0; i < m_MinBlockCount; ++i)
  11039     {
  11040         VkResult res = CreateBlock(m_PreferredBlockSize, VMA_NULL);
  11041         if (res != VK_SUCCESS)
  11042         {
  11043             return res;
  11044         }
  11045     }
  11046     return VK_SUCCESS;
  11047 }
  11048 
  11049 void VmaBlockVector::AddStatistics(VmaStatistics& inoutStats)
  11050 {
  11051     VmaMutexLockRead lock(m_Mutex, m_hAllocator->m_UseMutex);
  11052 
  11053     const size_t blockCount = m_Blocks.size();
  11054     for (uint32_t blockIndex = 0; blockIndex < blockCount; ++blockIndex)
  11055     {
  11056         const VmaDeviceMemoryBlock* const pBlock = m_Blocks[blockIndex];
  11057         VMA_ASSERT(pBlock);
  11058         VMA_HEAVY_ASSERT(pBlock->Validate());
  11059         pBlock->m_pMetadata->AddStatistics(inoutStats);
  11060     }
  11061 }
  11062 
  11063 void VmaBlockVector::AddDetailedStatistics(VmaDetailedStatistics& inoutStats)
  11064 {
  11065     VmaMutexLockRead lock(m_Mutex, m_hAllocator->m_UseMutex);
  11066 
  11067     const size_t blockCount = m_Blocks.size();
  11068     for (uint32_t blockIndex = 0; blockIndex < blockCount; ++blockIndex)
  11069     {
  11070         const VmaDeviceMemoryBlock* const pBlock = m_Blocks[blockIndex];
  11071         VMA_ASSERT(pBlock);
  11072         VMA_HEAVY_ASSERT(pBlock->Validate());
  11073         pBlock->m_pMetadata->AddDetailedStatistics(inoutStats);
  11074     }
  11075 }
  11076 
  11077 bool VmaBlockVector::IsEmpty()
  11078 {
  11079     VmaMutexLockRead lock(m_Mutex, m_hAllocator->m_UseMutex);
  11080     return m_Blocks.empty();
  11081 }
  11082 
  11083 bool VmaBlockVector::IsCorruptionDetectionEnabled() const
  11084 {
  11085     const uint32_t requiredMemFlags = VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT | VK_MEMORY_PROPERTY_HOST_COHERENT_BIT;
  11086     return (VMA_DEBUG_DETECT_CORRUPTION != 0) &&
  11087         (VMA_DEBUG_MARGIN > 0) &&
  11088         (m_Algorithm == 0 || m_Algorithm == VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT) &&
  11089         (m_hAllocator->m_MemProps.memoryTypes[m_MemoryTypeIndex].propertyFlags & requiredMemFlags) == requiredMemFlags;
  11090 }
  11091 
  11092 VkResult VmaBlockVector::Allocate(
  11093     VkDeviceSize size,
  11094     VkDeviceSize alignment,
  11095     const VmaAllocationCreateInfo& createInfo,
  11096     VmaSuballocationType suballocType,
  11097     size_t allocationCount,
  11098     VmaAllocation* pAllocations)
  11099 {
  11100     size_t allocIndex;
  11101     VkResult res = VK_SUCCESS;
  11102 
  11103     alignment = VMA_MAX(alignment, m_MinAllocationAlignment);
  11104 
  11105     if (IsCorruptionDetectionEnabled())
  11106     {
  11107         size = VmaAlignUp<VkDeviceSize>(size, sizeof(VMA_CORRUPTION_DETECTION_MAGIC_VALUE));
  11108         alignment = VmaAlignUp<VkDeviceSize>(alignment, sizeof(VMA_CORRUPTION_DETECTION_MAGIC_VALUE));
  11109     }
  11110 
  11111     {
  11112         VmaMutexLockWrite lock(m_Mutex, m_hAllocator->m_UseMutex);
  11113         for (allocIndex = 0; allocIndex < allocationCount; ++allocIndex)
  11114         {
  11115             res = AllocatePage(
  11116                 size,
  11117                 alignment,
  11118                 createInfo,
  11119                 suballocType,
  11120                 pAllocations + allocIndex);
  11121             if (res != VK_SUCCESS)
  11122             {
  11123                 break;
  11124             }
  11125         }
  11126     }
  11127 
  11128     if (res != VK_SUCCESS)
  11129     {
  11130         // Free all already created allocations.
  11131         while (allocIndex--)
  11132             Free(pAllocations[allocIndex]);
  11133         memset(pAllocations, 0, sizeof(VmaAllocation) * allocationCount);
  11134     }
  11135 
  11136     return res;
  11137 }
  11138 
  11139 VkResult VmaBlockVector::AllocatePage(
  11140     VkDeviceSize size,
  11141     VkDeviceSize alignment,
  11142     const VmaAllocationCreateInfo& createInfo,
  11143     VmaSuballocationType suballocType,
  11144     VmaAllocation* pAllocation)
  11145 {
  11146     const bool isUpperAddress = (createInfo.flags & VMA_ALLOCATION_CREATE_UPPER_ADDRESS_BIT) != 0;
  11147 
  11148     VkDeviceSize freeMemory;
  11149     {
  11150         const uint32_t heapIndex = m_hAllocator->MemoryTypeIndexToHeapIndex(m_MemoryTypeIndex);
  11151         VmaBudget heapBudget = {};
  11152         m_hAllocator->GetHeapBudgets(&heapBudget, heapIndex, 1);
  11153         freeMemory = (heapBudget.usage < heapBudget.budget) ? (heapBudget.budget - heapBudget.usage) : 0;
  11154     }
  11155 
  11156     const bool canFallbackToDedicated = !HasExplicitBlockSize() &&
  11157         (createInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) == 0;
  11158     const bool canCreateNewBlock =
  11159         ((createInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) == 0) &&
  11160         (m_Blocks.size() < m_MaxBlockCount) &&
  11161         (freeMemory >= size || !canFallbackToDedicated);
  11162     uint32_t strategy = createInfo.flags & VMA_ALLOCATION_CREATE_STRATEGY_MASK;
  11163 
  11164     // Upper address can only be used with the linear allocator and within a single memory block.
  11165     if (isUpperAddress &&
  11166         (m_Algorithm != VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT || m_MaxBlockCount > 1))
  11167     {
  11168         return VK_ERROR_FEATURE_NOT_PRESENT;
  11169     }
  11170 
  11171     // Early reject: the requested allocation size is larger than the maximum block size for this block vector.
  11172     if (size + VMA_DEBUG_MARGIN > m_PreferredBlockSize)
  11173     {
  11174         return VK_ERROR_OUT_OF_DEVICE_MEMORY;
  11175     }
  11176 
  11177     // 1. Search existing allocations. Try to allocate.
  11178     if (m_Algorithm == VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT)
  11179     {
  11180         // Use only last block.
  11181         if (!m_Blocks.empty())
  11182         {
  11183             VmaDeviceMemoryBlock* const pCurrBlock = m_Blocks.back();
  11184             VMA_ASSERT(pCurrBlock);
  11185             VkResult res = AllocateFromBlock(
  11186                 pCurrBlock, size, alignment, createInfo.flags, createInfo.pUserData, suballocType, strategy, pAllocation);
  11187             if (res == VK_SUCCESS)
  11188             {
  11189                 VMA_DEBUG_LOG_FORMAT("    Returned from last block #%" PRIu32, pCurrBlock->GetId());
  11190                 IncrementallySortBlocks();
  11191                 return VK_SUCCESS;
  11192             }
  11193         }
  11194     }
  11195     else
  11196     {
  11197         if (strategy != VMA_ALLOCATION_CREATE_STRATEGY_MIN_TIME_BIT) // MIN_MEMORY or default
  11198         {
  11199             const bool isHostVisible =
  11200                 (m_hAllocator->m_MemProps.memoryTypes[m_MemoryTypeIndex].propertyFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) != 0;
  11201             if(isHostVisible)
  11202             {
  11203                 const bool isMappingAllowed = (createInfo.flags &
  11204                     (VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT | VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT)) != 0;
  11205                 /*
  11206                 For non-mappable allocations, check blocks that are not mapped first.
  11207                 For mappable allocations, check blocks that are already mapped first.
  11208                 This way, when there are many blocks, mappable and non-mappable allocations
  11209                 get separated, hopefully limiting the number of mapped blocks, which helps tools like RenderDoc.
  11210                 */
  11211                 for(size_t mappingI = 0; mappingI < 2; ++mappingI)
  11212                 {
  11213                     // Forward order in m_Blocks - prefer blocks with the smallest amount of free space.
  11214                     for (size_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex)
  11215                     {
  11216                         VmaDeviceMemoryBlock* const pCurrBlock = m_Blocks[blockIndex];
  11217                         VMA_ASSERT(pCurrBlock);
  11218                         const bool isBlockMapped = pCurrBlock->GetMappedData() != VMA_NULL;
  11219                         if((mappingI == 0) == (isMappingAllowed == isBlockMapped))
  11220                         {
  11221                             VkResult res = AllocateFromBlock(
  11222                                 pCurrBlock, size, alignment, createInfo.flags, createInfo.pUserData, suballocType, strategy, pAllocation);
  11223                             if (res == VK_SUCCESS)
  11224                             {
  11225                                 VMA_DEBUG_LOG_FORMAT("    Returned from existing block #%" PRIu32, pCurrBlock->GetId());
  11226                                 IncrementallySortBlocks();
  11227                                 return VK_SUCCESS;
  11228                             }
  11229                         }
  11230                     }
  11231                 }
  11232             }
  11233             else
  11234             {
  11235                 // Forward order in m_Blocks - prefer blocks with the smallest amount of free space.
  11236                 for (size_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex)
  11237                 {
  11238                     VmaDeviceMemoryBlock* const pCurrBlock = m_Blocks[blockIndex];
  11239                     VMA_ASSERT(pCurrBlock);
  11240                     VkResult res = AllocateFromBlock(
  11241                         pCurrBlock, size, alignment, createInfo.flags, createInfo.pUserData, suballocType, strategy, pAllocation);
  11242                     if (res == VK_SUCCESS)
  11243                     {
  11244                         VMA_DEBUG_LOG_FORMAT("    Returned from existing block #%" PRIu32, pCurrBlock->GetId());
  11245                         IncrementallySortBlocks();
  11246                         return VK_SUCCESS;
  11247                     }
  11248                 }
  11249             }
  11250         }
  11251         else // VMA_ALLOCATION_CREATE_STRATEGY_MIN_TIME_BIT
  11252         {
  11253             // Backward order in m_Blocks - prefer blocks with the largest amount of free space.
  11254             for (size_t blockIndex = m_Blocks.size(); blockIndex--; )
  11255             {
  11256                 VmaDeviceMemoryBlock* const pCurrBlock = m_Blocks[blockIndex];
  11257                 VMA_ASSERT(pCurrBlock);
  11258                 VkResult res = AllocateFromBlock(pCurrBlock, size, alignment, createInfo.flags, createInfo.pUserData, suballocType, strategy, pAllocation);
  11259                 if (res == VK_SUCCESS)
  11260                 {
  11261                     VMA_DEBUG_LOG_FORMAT("    Returned from existing block #%" PRIu32, pCurrBlock->GetId());
  11262                     IncrementallySortBlocks();
  11263                     return VK_SUCCESS;
  11264                 }
  11265             }
  11266         }
  11267     }
  11268 
  11269     // 2. Try to create new block.
  11270     if (canCreateNewBlock)
  11271     {
  11272         // Calculate optimal size for new block.
  11273         VkDeviceSize newBlockSize = m_PreferredBlockSize;
  11274         uint32_t newBlockSizeShift = 0;
  11275         const uint32_t NEW_BLOCK_SIZE_SHIFT_MAX = 3;
  11276 
  11277         if (!m_ExplicitBlockSize)
  11278         {
  11279             // Allocate 1/8, 1/4, 1/2 of the preferred block size as the first blocks.
  11280             const VkDeviceSize maxExistingBlockSize = CalcMaxBlockSize();
  11281             for (uint32_t i = 0; i < NEW_BLOCK_SIZE_SHIFT_MAX; ++i)
  11282             {
  11283                 const VkDeviceSize smallerNewBlockSize = newBlockSize / 2;
  11284                 if (smallerNewBlockSize > maxExistingBlockSize && smallerNewBlockSize >= size * 2)
  11285                 {
  11286                     newBlockSize = smallerNewBlockSize;
  11287                     ++newBlockSizeShift;
  11288                 }
  11289                 else
  11290                 {
  11291                     break;
  11292                 }
  11293             }
  11294         }
  11295 
  11296         size_t newBlockIndex = 0;
  11297         VkResult res = (newBlockSize <= freeMemory || !canFallbackToDedicated) ?
  11298             CreateBlock(newBlockSize, &newBlockIndex) : VK_ERROR_OUT_OF_DEVICE_MEMORY;
  11299         // Allocation of this size failed? Try 1/2, 1/4, 1/8 of m_PreferredBlockSize.
  11300         if (!m_ExplicitBlockSize)
  11301         {
  11302             while (res < 0 && newBlockSizeShift < NEW_BLOCK_SIZE_SHIFT_MAX)
  11303             {
  11304                 const VkDeviceSize smallerNewBlockSize = newBlockSize / 2;
  11305                 if (smallerNewBlockSize >= size)
  11306                 {
  11307                     newBlockSize = smallerNewBlockSize;
  11308                     ++newBlockSizeShift;
  11309                     res = (newBlockSize <= freeMemory || !canFallbackToDedicated) ?
  11310                         CreateBlock(newBlockSize, &newBlockIndex) : VK_ERROR_OUT_OF_DEVICE_MEMORY;
  11311                 }
  11312                 else
  11313                 {
  11314                     break;
  11315                 }
  11316             }
  11317         }
  11318 
  11319         if (res == VK_SUCCESS)
  11320         {
  11321             VmaDeviceMemoryBlock* const pBlock = m_Blocks[newBlockIndex];
  11322             VMA_ASSERT(pBlock->m_pMetadata->GetSize() >= size);
  11323 
  11324             res = AllocateFromBlock(
  11325                 pBlock, size, alignment, createInfo.flags, createInfo.pUserData, suballocType, strategy, pAllocation);
  11326             if (res == VK_SUCCESS)
  11327             {
  11328                 VMA_DEBUG_LOG_FORMAT("    Created new block #%" PRIu32 " Size=%" PRIu64, pBlock->GetId(), newBlockSize);
  11329                 IncrementallySortBlocks();
  11330                 return VK_SUCCESS;
  11331             }
  11332             else
  11333             {
  11334                 // Allocation from new block failed, possibly due to VMA_DEBUG_MARGIN or alignment.
  11335                 return VK_ERROR_OUT_OF_DEVICE_MEMORY;
  11336             }
  11337         }
  11338     }
  11339 
  11340     return VK_ERROR_OUT_OF_DEVICE_MEMORY;
  11341 }
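
/*
A worked example of the block sizing above: with m_PreferredBlockSize = 256 MiB,
no existing blocks, and a 40 MiB request, the first loop halves the candidate
size to 128 MiB (128 >= 2 * 40) but stops at 64 MiB (64 < 2 * 40), so the new
block is created at 128 MiB. If vkAllocateMemory fails at that size, the retry
loop halves further while the block still fits the request and the shift budget
of NEW_BLOCK_SIZE_SHIFT_MAX allows: 64 MiB is tried next, but 32 MiB (< 40 MiB)
is not.
*/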
  11342 
  11343 void VmaBlockVector::Free(const VmaAllocation hAllocation)
  11344 {
  11345     VmaDeviceMemoryBlock* pBlockToDelete = VMA_NULL;
  11346 
  11347     bool budgetExceeded = false;
  11348     {
  11349         const uint32_t heapIndex = m_hAllocator->MemoryTypeIndexToHeapIndex(m_MemoryTypeIndex);
  11350         VmaBudget heapBudget = {};
  11351         m_hAllocator->GetHeapBudgets(&heapBudget, heapIndex, 1);
  11352         budgetExceeded = heapBudget.usage >= heapBudget.budget;
  11353     }
  11354 
  11355     // Scope for lock.
  11356     {
  11357         VmaMutexLockWrite lock(m_Mutex, m_hAllocator->m_UseMutex);
  11358 
  11359         VmaDeviceMemoryBlock* pBlock = hAllocation->GetBlock();
  11360 
  11361         if (IsCorruptionDetectionEnabled())
  11362         {
  11363             VkResult res = pBlock->ValidateMagicValueAfterAllocation(m_hAllocator, hAllocation->GetOffset(), hAllocation->GetSize());
  11364             VMA_ASSERT(res == VK_SUCCESS && "Couldn't map block memory to validate magic value.");
  11365         }
  11366 
  11367         if (hAllocation->IsPersistentMap())
  11368         {
  11369             pBlock->Unmap(m_hAllocator, 1);
  11370         }
  11371 
  11372         const bool hadEmptyBlockBeforeFree = HasEmptyBlock();
  11373         pBlock->m_pMetadata->Free(hAllocation->GetAllocHandle());
  11374         pBlock->PostFree(m_hAllocator);
  11375         VMA_HEAVY_ASSERT(pBlock->Validate());
  11376 
  11377         VMA_DEBUG_LOG_FORMAT("  Freed from MemoryTypeIndex=%" PRIu32, m_MemoryTypeIndex);
  11378 
  11379         const bool canDeleteBlock = m_Blocks.size() > m_MinBlockCount;
  11380         // pBlock became empty after this deallocation.
  11381         if (pBlock->m_pMetadata->IsEmpty())
  11382         {
  11383             // Already had empty block. We don't want to have two, so delete this one.
  11384             if ((hadEmptyBlockBeforeFree || budgetExceeded) && canDeleteBlock)
  11385             {
  11386                 pBlockToDelete = pBlock;
  11387                 Remove(pBlock);
  11388             }
  11389             // else: We now have exactly one empty block - leave it as a hysteresis to avoid allocating a whole block back and forth.
  11390         }
  11391         // pBlock didn't become empty, but there is another empty block - find and free that one.
  11392         // (This is an optional heuristic.)
  11393         else if (hadEmptyBlockBeforeFree && canDeleteBlock)
  11394         {
  11395             VmaDeviceMemoryBlock* pLastBlock = m_Blocks.back();
  11396             if (pLastBlock->m_pMetadata->IsEmpty())
  11397             {
  11398                 pBlockToDelete = pLastBlock;
  11399                 m_Blocks.pop_back();
  11400             }
  11401         }
  11402 
  11403         IncrementallySortBlocks();
  11404     }
  11405 
  11406     // Destruction of the empty block is deferred until this point, outside of the
  11407     // mutex lock, for performance reasons.
  11408     if (pBlockToDelete != VMA_NULL)
  11409     {
  11410         VMA_DEBUG_LOG_FORMAT("    Deleted empty block #%" PRIu32, pBlockToDelete->GetId());
  11411         pBlockToDelete->Destroy(m_hAllocator);
  11412         vma_delete(m_hAllocator, pBlockToDelete);
  11413     }
  11414 
  11415     m_hAllocator->m_Budget.RemoveAllocation(m_hAllocator->MemoryTypeIndexToHeapIndex(m_MemoryTypeIndex), hAllocation->GetSize());
  11416     m_hAllocator->m_AllocationObjectAllocator.Free(hAllocation);
  11417 }
  11418 
  11419 VkDeviceSize VmaBlockVector::CalcMaxBlockSize() const
  11420 {
  11421     VkDeviceSize result = 0;
  11422     for (size_t i = m_Blocks.size(); i--; )
  11423     {
  11424         result = VMA_MAX(result, m_Blocks[i]->m_pMetadata->GetSize());
  11425         if (result >= m_PreferredBlockSize)
  11426         {
  11427             break;
  11428         }
  11429     }
  11430     return result;
  11431 }
  11432 
  11433 void VmaBlockVector::Remove(VmaDeviceMemoryBlock* pBlock)
  11434 {
  11435     for (uint32_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex)
  11436     {
  11437         if (m_Blocks[blockIndex] == pBlock)
  11438         {
  11439             VmaVectorRemove(m_Blocks, blockIndex);
  11440             return;
  11441         }
  11442     }
  11443     VMA_ASSERT(0);
  11444 }
  11445 
  11446 void VmaBlockVector::IncrementallySortBlocks()
  11447 {
  11448     if (!m_IncrementalSort)
  11449         return;
  11450     if (m_Algorithm != VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT)
  11451     {
  11452         // Bubble sort only until first swap.
  11453         for (size_t i = 1; i < m_Blocks.size(); ++i)
  11454         {
  11455             if (m_Blocks[i - 1]->m_pMetadata->GetSumFreeSize() > m_Blocks[i]->m_pMetadata->GetSumFreeSize())
  11456             {
  11457                 std::swap(m_Blocks[i - 1], m_Blocks[i]);
  11458                 return;
  11459             }
  11460         }
  11461     }
  11462 }
  11463 
  11464 void VmaBlockVector::SortByFreeSize()
  11465 {
  11466     VMA_SORT(m_Blocks.begin(), m_Blocks.end(),
  11467         [](VmaDeviceMemoryBlock* b1, VmaDeviceMemoryBlock* b2) -> bool
  11468         {
  11469             return b1->m_pMetadata->GetSumFreeSize() < b2->m_pMetadata->GetSumFreeSize();
  11470         });
  11471 }
  11472 
  11473 VkResult VmaBlockVector::AllocateFromBlock(
  11474     VmaDeviceMemoryBlock* pBlock,
  11475     VkDeviceSize size,
  11476     VkDeviceSize alignment,
  11477     VmaAllocationCreateFlags allocFlags,
  11478     void* pUserData,
  11479     VmaSuballocationType suballocType,
  11480     uint32_t strategy,
  11481     VmaAllocation* pAllocation)
  11482 {
  11483     const bool isUpperAddress = (allocFlags & VMA_ALLOCATION_CREATE_UPPER_ADDRESS_BIT) != 0;
  11484 
  11485     VmaAllocationRequest currRequest = {};
  11486     if (pBlock->m_pMetadata->CreateAllocationRequest(
  11487         size,
  11488         alignment,
  11489         isUpperAddress,
  11490         suballocType,
  11491         strategy,
  11492         &currRequest))
  11493     {
  11494         return CommitAllocationRequest(currRequest, pBlock, alignment, allocFlags, pUserData, suballocType, pAllocation);
  11495     }
  11496     return VK_ERROR_OUT_OF_DEVICE_MEMORY;
  11497 }
  11498 
  11499 VkResult VmaBlockVector::CommitAllocationRequest(
  11500     VmaAllocationRequest& allocRequest,
  11501     VmaDeviceMemoryBlock* pBlock,
  11502     VkDeviceSize alignment,
  11503     VmaAllocationCreateFlags allocFlags,
  11504     void* pUserData,
  11505     VmaSuballocationType suballocType,
  11506     VmaAllocation* pAllocation)
  11507 {
  11508     const bool mapped = (allocFlags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0;
  11509     const bool isUserDataString = (allocFlags & VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT) != 0;
  11510     const bool isMappingAllowed = (allocFlags &
  11511         (VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT | VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT)) != 0;
  11512 
  11513     pBlock->PostAlloc(m_hAllocator);
  11514     // Allocate from pBlock.
  11515     if (mapped)
  11516     {
  11517         VkResult res = pBlock->Map(m_hAllocator, 1, VMA_NULL);
  11518         if (res != VK_SUCCESS)
  11519         {
  11520             return res;
  11521         }
  11522     }
  11523 
  11524     *pAllocation = m_hAllocator->m_AllocationObjectAllocator.Allocate(isMappingAllowed);
  11525     pBlock->m_pMetadata->Alloc(allocRequest, suballocType, *pAllocation);
  11526     (*pAllocation)->InitBlockAllocation(
  11527         pBlock,
  11528         allocRequest.allocHandle,
  11529         alignment,
  11530         allocRequest.size, // Not size, as actual allocation size may be larger than requested!
  11531         m_MemoryTypeIndex,
  11532         suballocType,
  11533         mapped);
  11534     VMA_HEAVY_ASSERT(pBlock->Validate());
  11535     if (isUserDataString)
  11536         (*pAllocation)->SetName(m_hAllocator, (const char*)pUserData);
  11537     else
  11538         (*pAllocation)->SetUserData(m_hAllocator, pUserData);
  11539     m_hAllocator->m_Budget.AddAllocation(m_hAllocator->MemoryTypeIndexToHeapIndex(m_MemoryTypeIndex), allocRequest.size);
  11540     if (VMA_DEBUG_INITIALIZE_ALLOCATIONS)
  11541     {
  11542         m_hAllocator->FillAllocation(*pAllocation, VMA_ALLOCATION_FILL_PATTERN_CREATED);
  11543     }
  11544     if (IsCorruptionDetectionEnabled())
  11545     {
  11546         VkResult res = pBlock->WriteMagicValueAfterAllocation(m_hAllocator, (*pAllocation)->GetOffset(), allocRequest.size);
  11547         VMA_ASSERT(res == VK_SUCCESS && "Couldn't map block memory to write magic value.");
  11548     }
  11549     return VK_SUCCESS;
  11550 }
  11551 
  11552 VkResult VmaBlockVector::CreateBlock(VkDeviceSize blockSize, size_t* pNewBlockIndex)
  11553 {
  11554     VkMemoryAllocateInfo allocInfo = { VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO };
  11555     allocInfo.pNext = m_pMemoryAllocateNext;
  11556     allocInfo.memoryTypeIndex = m_MemoryTypeIndex;
  11557     allocInfo.allocationSize = blockSize;
  11558 
  11559 #if VMA_BUFFER_DEVICE_ADDRESS
  11560     // Every standalone block can potentially contain a buffer with VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT - always enable the feature.
  11561     VkMemoryAllocateFlagsInfoKHR allocFlagsInfo = { VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_FLAGS_INFO_KHR };
  11562     if (m_hAllocator->m_UseKhrBufferDeviceAddress)
  11563     {
  11564         allocFlagsInfo.flags = VK_MEMORY_ALLOCATE_DEVICE_ADDRESS_BIT_KHR;
  11565         VmaPnextChainPushFront(&allocInfo, &allocFlagsInfo);
  11566     }
  11567 #endif // VMA_BUFFER_DEVICE_ADDRESS
  11568 
  11569 #if VMA_MEMORY_PRIORITY
  11570     VkMemoryPriorityAllocateInfoEXT priorityInfo = { VK_STRUCTURE_TYPE_MEMORY_PRIORITY_ALLOCATE_INFO_EXT };
  11571     if (m_hAllocator->m_UseExtMemoryPriority)
  11572     {
  11573         VMA_ASSERT(m_Priority >= 0.f && m_Priority <= 1.f);
  11574         priorityInfo.priority = m_Priority;
  11575         VmaPnextChainPushFront(&allocInfo, &priorityInfo);
  11576     }
  11577 #endif // VMA_MEMORY_PRIORITY
  11578 
  11579 #if VMA_EXTERNAL_MEMORY
  11580     // Attach VkExportMemoryAllocateInfoKHR if necessary.
  11581     VkExportMemoryAllocateInfoKHR exportMemoryAllocInfo = { VK_STRUCTURE_TYPE_EXPORT_MEMORY_ALLOCATE_INFO_KHR };
  11582     exportMemoryAllocInfo.handleTypes = m_hAllocator->GetExternalMemoryHandleTypeFlags(m_MemoryTypeIndex);
  11583     if (exportMemoryAllocInfo.handleTypes != 0)
  11584     {
  11585         VmaPnextChainPushFront(&allocInfo, &exportMemoryAllocInfo);
  11586     }
  11587 #endif // VMA_EXTERNAL_MEMORY
  11588 
  11589     VkDeviceMemory mem = VK_NULL_HANDLE;
  11590     VkResult res = m_hAllocator->AllocateVulkanMemory(&allocInfo, &mem);
  11591     if (res < 0)
  11592     {
  11593         return res;
  11594     }
  11595 
  11596     // New VkDeviceMemory successfully created.
  11597 
  11598     // Create a new VmaDeviceMemoryBlock object for it.
  11599     VmaDeviceMemoryBlock* const pBlock = vma_new(m_hAllocator, VmaDeviceMemoryBlock)(m_hAllocator);
  11600     pBlock->Init(
  11601         m_hAllocator,
  11602         m_hParentPool,
  11603         m_MemoryTypeIndex,
  11604         mem,
  11605         allocInfo.allocationSize,
  11606         m_NextBlockId++,
  11607         m_Algorithm,
  11608         m_BufferImageGranularity);
  11609 
  11610     m_Blocks.push_back(pBlock);
  11611     if (pNewBlockIndex != VMA_NULL)
  11612     {
  11613         *pNewBlockIndex = m_Blocks.size() - 1;
  11614     }
  11615 
  11616     return VK_SUCCESS;
  11617 }
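        // The three optional structs above are attached with a simple push-front
        // pattern. A minimal standalone sketch of that pattern (illustrative only,
        // mirroring what VmaPnextChainPushFront does; not part of the public API):
        //
        //     template<typename MainT, typename NewT>
        //     void PnextChainPushFront(MainT* mainStruct, NewT* newStruct)
        //     {
        //         newStruct->pNext = mainStruct->pNext; // New struct adopts the old chain head...
        //         mainStruct->pNext = newStruct;        // ...and becomes the new first element.
        //     }
        //
        // This lets CreateBlock attach VkMemoryAllocateFlagsInfoKHR,
        // VkMemoryPriorityAllocateInfoEXT and VkExportMemoryAllocateInfoKHR to the
        // same VkMemoryAllocateInfo in any order.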
  11618 
  11619 bool VmaBlockVector::HasEmptyBlock()
  11620 {
  11621     for (size_t index = 0, count = m_Blocks.size(); index < count; ++index)
  11622     {
  11623         VmaDeviceMemoryBlock* const pBlock = m_Blocks[index];
  11624         if (pBlock->m_pMetadata->IsEmpty())
  11625         {
  11626             return true;
  11627         }
  11628     }
  11629     return false;
  11630 }
  11631 
  11632 #if VMA_STATS_STRING_ENABLED
  11633 void VmaBlockVector::PrintDetailedMap(class VmaJsonWriter& json)
  11634 {
  11635     VmaMutexLockRead lock(m_Mutex, m_hAllocator->m_UseMutex);
  11636 
  11638     json.BeginObject();
  11639     for (size_t i = 0; i < m_Blocks.size(); ++i)
  11640     {
  11641         json.BeginString();
  11642         json.ContinueString(m_Blocks[i]->GetId());
  11643         json.EndString();
  11644 
  11645         json.BeginObject();
  11646         json.WriteString("MapRefCount");
  11647         json.WriteNumber(m_Blocks[i]->GetMapRefCount());
  11648 
  11649         m_Blocks[i]->m_pMetadata->PrintDetailedMap(json);
  11650         json.EndObject();
  11651     }
  11652     json.EndObject();
  11653 }
  11654 #endif // VMA_STATS_STRING_ENABLED
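        // For reference, PrintDetailedMap above emits one JSON object per block,
        // keyed by block ID - roughly (the shape is illustrative, not normative):
        //
        //     {
        //         "0": { "MapRefCount": 1, ...per-block metadata... },
        //         "5": { "MapRefCount": 0, ...per-block metadata... }
        //     }
        //
        // The inner layout depends on the VmaBlockMetadata implementation in use.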
  11655 
  11656 VkResult VmaBlockVector::CheckCorruption()
  11657 {
  11658     if (!IsCorruptionDetectionEnabled())
  11659     {
  11660         return VK_ERROR_FEATURE_NOT_PRESENT;
  11661     }
  11662 
  11663     VmaMutexLockRead lock(m_Mutex, m_hAllocator->m_UseMutex);
  11664     for (uint32_t blockIndex = 0; blockIndex < m_Blocks.size(); ++blockIndex)
  11665     {
  11666         VmaDeviceMemoryBlock* const pBlock = m_Blocks[blockIndex];
  11667         VMA_ASSERT(pBlock);
  11668         VkResult res = pBlock->CheckCorruption(m_hAllocator);
  11669         if (res != VK_SUCCESS)
  11670         {
  11671             return res;
  11672         }
  11673     }
  11674     return VK_SUCCESS;
  11675 }
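        // CheckCorruption only does anything when corruption detection was compiled
        // in. A hedged usage sketch through the public API (the macro values are
        // examples, not defaults):
        //
        //     #define VMA_DEBUG_MARGIN 16
        //     #define VMA_DEBUG_DETECT_CORRUPTION 1
        //     #include "vk_mem_alloc.h"
        //     ...
        //     // Validate the magic values around allocations in all memory types:
        //     VkResult res = vmaCheckCorruption(allocator, UINT32_MAX);
        //     // VK_ERROR_FEATURE_NOT_PRESENT means detection is not enabled for
        //     // any of the requested memory types.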
  11676 
  11677 #endif // _VMA_BLOCK_VECTOR_FUNCTIONS
  11678 
  11679 #ifndef _VMA_DEFRAGMENTATION_CONTEXT_FUNCTIONS
  11680 VmaDefragmentationContext_T::VmaDefragmentationContext_T(
  11681     VmaAllocator hAllocator,
  11682     const VmaDefragmentationInfo& info)
  11683     : m_MaxPassBytes(info.maxBytesPerPass == 0 ? VK_WHOLE_SIZE : info.maxBytesPerPass),
  11684     m_MaxPassAllocations(info.maxAllocationsPerPass == 0 ? UINT32_MAX : info.maxAllocationsPerPass),
  11685     m_BreakCallback(info.pfnBreakCallback),
  11686     m_BreakCallbackUserData(info.pBreakCallbackUserData),
  11687     m_MoveAllocator(hAllocator->GetAllocationCallbacks()),
  11688     m_Moves(m_MoveAllocator)
  11689 {
  11690     m_Algorithm = info.flags & VMA_DEFRAGMENTATION_FLAG_ALGORITHM_MASK;
  11691 
  11692     if (info.pool != VMA_NULL)
  11693     {
  11694         m_BlockVectorCount = 1;
  11695         m_PoolBlockVector = &info.pool->m_BlockVector;
  11696         m_pBlockVectors = &m_PoolBlockVector;
  11697         m_PoolBlockVector->SetIncrementalSort(false);
  11698         m_PoolBlockVector->SortByFreeSize();
  11699     }
  11700     else
  11701     {
  11702         m_BlockVectorCount = hAllocator->GetMemoryTypeCount();
  11703         m_PoolBlockVector = VMA_NULL;
  11704         m_pBlockVectors = hAllocator->m_pBlockVectors;
  11705         for (uint32_t i = 0; i < m_BlockVectorCount; ++i)
  11706         {
  11707             VmaBlockVector* vector = m_pBlockVectors[i];
  11708             if (vector != VMA_NULL)
  11709             {
  11710                 vector->SetIncrementalSort(false);
  11711                 vector->SortByFreeSize();
  11712             }
  11713         }
  11714     }
  11715 
  11716     switch (m_Algorithm)
  11717     {
  11718     case 0: // Default algorithm
  11719         m_Algorithm = VMA_DEFRAGMENTATION_FLAG_ALGORITHM_BALANCED_BIT;
  11720         m_AlgorithmState = vma_new_array(hAllocator, StateBalanced, m_BlockVectorCount);
  11721         break;
  11722     case VMA_DEFRAGMENTATION_FLAG_ALGORITHM_BALANCED_BIT:
  11723         m_AlgorithmState = vma_new_array(hAllocator, StateBalanced, m_BlockVectorCount);
  11724         break;
  11725     case VMA_DEFRAGMENTATION_FLAG_ALGORITHM_EXTENSIVE_BIT:
  11726         if (hAllocator->GetBufferImageGranularity() > 1)
  11727         {
  11728             m_AlgorithmState = vma_new_array(hAllocator, StateExtensive, m_BlockVectorCount);
  11729         }
  11730         break;
  11731     }
  11732 }
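        // This context is normally configured through VmaDefragmentationInfo. A
        // hedged sketch of a typical setup (hPool is a placeholder for the caller's
        // own pool handle):
        //
        //     VmaDefragmentationInfo defragInfo = {};
        //     // Pick an algorithm; leaving flags at 0 selects the balanced default
        //     // handled in the switch above.
        //     defragInfo.flags = VMA_DEFRAGMENTATION_FLAG_ALGORITHM_FAST_BIT;
        //     defragInfo.pool = hPool; // or VMA_NULL to cover all default block vectors
        //     // Bound the work done per pass (0 means unlimited):
        //     defragInfo.maxBytesPerPass = 64ull * 1024 * 1024;
        //     defragInfo.maxAllocationsPerPass = 128;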
  11733 
  11734 VmaDefragmentationContext_T::~VmaDefragmentationContext_T()
  11735 {
  11736     if (m_PoolBlockVector != VMA_NULL)
  11737     {
  11738         m_PoolBlockVector->SetIncrementalSort(true);
  11739     }
  11740     else
  11741     {
  11742         for (uint32_t i = 0; i < m_BlockVectorCount; ++i)
  11743         {
  11744             VmaBlockVector* vector = m_pBlockVectors[i];
  11745             if (vector != VMA_NULL)
  11746                 vector->SetIncrementalSort(true);
  11747         }
  11748     }
  11749 
  11750     if (m_AlgorithmState)
  11751     {
  11752         switch (m_Algorithm)
  11753         {
  11754         case VMA_DEFRAGMENTATION_FLAG_ALGORITHM_BALANCED_BIT:
  11755             vma_delete_array(m_MoveAllocator.m_pCallbacks, reinterpret_cast<StateBalanced*>(m_AlgorithmState), m_BlockVectorCount);
  11756             break;
  11757         case VMA_DEFRAGMENTATION_FLAG_ALGORITHM_EXTENSIVE_BIT:
  11758             vma_delete_array(m_MoveAllocator.m_pCallbacks, reinterpret_cast<StateExtensive*>(m_AlgorithmState), m_BlockVectorCount);
  11759             break;
  11760         default:
  11761             VMA_ASSERT(0);
  11762         }
  11763     }
  11764 }
  11765 
  11766 VkResult VmaDefragmentationContext_T::DefragmentPassBegin(VmaDefragmentationPassMoveInfo& moveInfo)
  11767 {
  11768     if (m_PoolBlockVector != VMA_NULL)
  11769     {
  11770         VmaMutexLockWrite lock(m_PoolBlockVector->GetMutex(), m_PoolBlockVector->GetAllocator()->m_UseMutex);
  11771 
  11772         if (m_PoolBlockVector->GetBlockCount() > 1)
  11773             ComputeDefragmentation(*m_PoolBlockVector, 0);
  11774         else if (m_PoolBlockVector->GetBlockCount() == 1)
  11775             ReallocWithinBlock(*m_PoolBlockVector, m_PoolBlockVector->GetBlock(0));
  11776     }
  11777     else
  11778     {
  11779         for (uint32_t i = 0; i < m_BlockVectorCount; ++i)
  11780         {
  11781             if (m_pBlockVectors[i] != VMA_NULL)
  11782             {
  11783                 VmaMutexLockWrite lock(m_pBlockVectors[i]->GetMutex(), m_pBlockVectors[i]->GetAllocator()->m_UseMutex);
  11784 
  11785                 if (m_pBlockVectors[i]->GetBlockCount() > 1)
  11786                 {
  11787                     if (ComputeDefragmentation(*m_pBlockVectors[i], i))
  11788                         break;
  11789                 }
  11790                 else if (m_pBlockVectors[i]->GetBlockCount() == 1)
  11791                 {
  11792                     if (ReallocWithinBlock(*m_pBlockVectors[i], m_pBlockVectors[i]->GetBlock(0)))
  11793                         break;
  11794                 }
  11795             }
  11796         }
  11797     }
  11798 
  11799     moveInfo.moveCount = static_cast<uint32_t>(m_Moves.size());
  11800     if (moveInfo.moveCount > 0)
  11801     {
  11802         moveInfo.pMoves = m_Moves.data();
  11803         return VK_INCOMPLETE;
  11804     }
  11805 
  11806     moveInfo.pMoves = VMA_NULL;
  11807     return VK_SUCCESS;
  11808 }
  11809 
  11810 VkResult VmaDefragmentationContext_T::DefragmentPassEnd(VmaDefragmentationPassMoveInfo& moveInfo)
  11811 {
  11812     VMA_ASSERT(moveInfo.moveCount > 0 ? moveInfo.pMoves != VMA_NULL : true);
  11813 
  11814     VkResult result = VK_SUCCESS;
  11815     VmaStlAllocator<FragmentedBlock> blockAllocator(m_MoveAllocator.m_pCallbacks);
  11816     VmaVector<FragmentedBlock, VmaStlAllocator<FragmentedBlock>> immovableBlocks(blockAllocator);
  11817     VmaVector<FragmentedBlock, VmaStlAllocator<FragmentedBlock>> mappedBlocks(blockAllocator);
  11818 
  11819     VmaAllocator allocator = VMA_NULL;
  11820     for (uint32_t i = 0; i < moveInfo.moveCount; ++i)
  11821     {
  11822         VmaDefragmentationMove& move = moveInfo.pMoves[i];
  11823         size_t prevCount = 0, currentCount = 0;
  11824         VkDeviceSize freedBlockSize = 0;
  11825 
  11826         uint32_t vectorIndex;
  11827         VmaBlockVector* vector;
  11828         if (m_PoolBlockVector != VMA_NULL)
  11829         {
  11830             vectorIndex = 0;
  11831             vector = m_PoolBlockVector;
  11832         }
  11833         else
  11834         {
  11835             vectorIndex = move.srcAllocation->GetMemoryTypeIndex();
  11836             vector = m_pBlockVectors[vectorIndex];
  11837             VMA_ASSERT(vector != VMA_NULL);
  11838         }
  11839 
  11840         switch (move.operation)
  11841         {
  11842         case VMA_DEFRAGMENTATION_MOVE_OPERATION_COPY:
  11843         {
  11844             uint8_t mapCount = move.srcAllocation->SwapBlockAllocation(vector->m_hAllocator, move.dstTmpAllocation);
  11845             if (mapCount > 0)
  11846             {
  11847                 allocator = vector->m_hAllocator;
  11848                 VmaDeviceMemoryBlock* newMapBlock = move.srcAllocation->GetBlock();
  11849                 bool notPresent = true;
  11850                 for (FragmentedBlock& block : mappedBlocks)
  11851                 {
  11852                     if (block.block == newMapBlock)
  11853                     {
  11854                         notPresent = false;
  11855                         block.data += mapCount;
  11856                         break;
  11857                     }
  11858                 }
  11859                 if (notPresent)
  11860                     mappedBlocks.push_back({ mapCount, newMapBlock });
  11861             }
  11862 
  11863             // Scope for locks; Free has its own lock.
  11864             {
  11865                 VmaMutexLockRead lock(vector->GetMutex(), vector->GetAllocator()->m_UseMutex);
  11866                 prevCount = vector->GetBlockCount();
  11867                 freedBlockSize = move.dstTmpAllocation->GetBlock()->m_pMetadata->GetSize();
  11868             }
  11869             vector->Free(move.dstTmpAllocation);
  11870             {
  11871                 VmaMutexLockRead lock(vector->GetMutex(), vector->GetAllocator()->m_UseMutex);
  11872                 currentCount = vector->GetBlockCount();
  11873             }
  11874 
  11875             result = VK_INCOMPLETE;
  11876             break;
  11877         }
  11878         case VMA_DEFRAGMENTATION_MOVE_OPERATION_IGNORE:
  11879         {
  11880             m_PassStats.bytesMoved -= move.srcAllocation->GetSize();
  11881             --m_PassStats.allocationsMoved;
  11882             vector->Free(move.dstTmpAllocation);
  11883 
  11884             VmaDeviceMemoryBlock* newBlock = move.srcAllocation->GetBlock();
  11885             bool notPresent = true;
  11886             for (const FragmentedBlock& block : immovableBlocks)
  11887             {
  11888                 if (block.block == newBlock)
  11889                 {
  11890                     notPresent = false;
  11891                     break;
  11892                 }
  11893             }
  11894             if (notPresent)
  11895                 immovableBlocks.push_back({ vectorIndex, newBlock });
  11896             break;
  11897         }
  11898         case VMA_DEFRAGMENTATION_MOVE_OPERATION_DESTROY:
  11899         {
  11900             m_PassStats.bytesMoved -= move.srcAllocation->GetSize();
  11901             --m_PassStats.allocationsMoved;
  11902             // Scope for locks; Free has its own lock.
  11903             {
  11904                 VmaMutexLockRead lock(vector->GetMutex(), vector->GetAllocator()->m_UseMutex);
  11905                 prevCount = vector->GetBlockCount();
  11906                 freedBlockSize = move.srcAllocation->GetBlock()->m_pMetadata->GetSize();
  11907             }
  11908             vector->Free(move.srcAllocation);
  11909             {
  11910                 VmaMutexLockRead lock(vector->GetMutex(), vector->GetAllocator()->m_UseMutex);
  11911                 currentCount = vector->GetBlockCount();
  11912             }
  11913             freedBlockSize *= prevCount - currentCount;
  11914 
  11915             VkDeviceSize dstBlockSize;
  11916             {
  11917                 VmaMutexLockRead lock(vector->GetMutex(), vector->GetAllocator()->m_UseMutex);
  11918                 dstBlockSize = move.dstTmpAllocation->GetBlock()->m_pMetadata->GetSize();
  11919             }
  11920             vector->Free(move.dstTmpAllocation);
  11921             {
  11922                 VmaMutexLockRead lock(vector->GetMutex(), vector->GetAllocator()->m_UseMutex);
  11923                 freedBlockSize += dstBlockSize * (currentCount - vector->GetBlockCount());
  11924                 currentCount = vector->GetBlockCount();
  11925             }
  11926 
  11927             result = VK_INCOMPLETE;
  11928             break;
  11929         }
  11930         default:
  11931             VMA_ASSERT(0);
  11932         }
  11933 
  11934         if (prevCount > currentCount)
  11935         {
  11936             size_t freedBlocks = prevCount - currentCount;
  11937             m_PassStats.deviceMemoryBlocksFreed += static_cast<uint32_t>(freedBlocks);
  11938             m_PassStats.bytesFreed += freedBlockSize;
  11939         }
  11940 
  11941         if(m_Algorithm == VMA_DEFRAGMENTATION_FLAG_ALGORITHM_EXTENSIVE_BIT &&
  11942             m_AlgorithmState != VMA_NULL)
  11943         {
  11944             // Avoid unnecessary attempts to allocate when a new free block is available
  11945             StateExtensive& state = reinterpret_cast<StateExtensive*>(m_AlgorithmState)[vectorIndex];
  11946             if (state.firstFreeBlock != SIZE_MAX)
  11947             {
  11948                 const size_t diff = prevCount - currentCount;
  11949                 if (state.firstFreeBlock >= diff)
  11950                 {
  11951                     state.firstFreeBlock -= diff;
  11952                     if (state.firstFreeBlock != 0)
  11953                         state.firstFreeBlock -= vector->GetBlock(state.firstFreeBlock - 1)->m_pMetadata->IsEmpty();
  11954                 }
  11955                 else
  11956                     state.firstFreeBlock = 0;
  11957             }
  11958         }
  11959     }
  11960     moveInfo.moveCount = 0;
  11961     moveInfo.pMoves = VMA_NULL;
  11962     m_Moves.clear();
  11963 
  11964     // Update stats
  11965     m_GlobalStats.allocationsMoved += m_PassStats.allocationsMoved;
  11966     m_GlobalStats.bytesFreed += m_PassStats.bytesFreed;
  11967     m_GlobalStats.bytesMoved += m_PassStats.bytesMoved;
  11968     m_GlobalStats.deviceMemoryBlocksFreed += m_PassStats.deviceMemoryBlocksFreed;
  11969     m_PassStats = { 0 };
  11970 
  11971     // Move blocks containing immovable allocations, according to the chosen algorithm
  11972     if (immovableBlocks.size() > 0)
  11973     {
  11974         do
  11975         {
  11976             if(m_Algorithm == VMA_DEFRAGMENTATION_FLAG_ALGORITHM_EXTENSIVE_BIT)
  11977             {
  11978                 if (m_AlgorithmState != VMA_NULL)
  11979                 {
  11980                     bool swapped = false;
  11981                     // Move to the start of the free-block range
  11982                     for (const FragmentedBlock& block : immovableBlocks)
  11983                     {
  11984                         StateExtensive& state = reinterpret_cast<StateExtensive*>(m_AlgorithmState)[block.data];
  11985                         if (state.operation != StateExtensive::Operation::Cleanup)
  11986                         {
  11987                             VmaBlockVector* vector = m_pBlockVectors[block.data];
  11988                             VmaMutexLockWrite lock(vector->GetMutex(), vector->GetAllocator()->m_UseMutex);
  11989 
  11990                             for (size_t i = 0, count = vector->GetBlockCount() - m_ImmovableBlockCount; i < count; ++i)
  11991                             {
  11992                                 if (vector->GetBlock(i) == block.block)
  11993                                 {
  11994                                     std::swap(vector->m_Blocks[i], vector->m_Blocks[vector->GetBlockCount() - ++m_ImmovableBlockCount]);
  11995                                     if (state.firstFreeBlock != SIZE_MAX)
  11996                                     {
  11997                                         if (i + 1 < state.firstFreeBlock)
  11998                                         {
  11999                                             if (state.firstFreeBlock > 1)
  12000                                                 std::swap(vector->m_Blocks[i], vector->m_Blocks[--state.firstFreeBlock]);
  12001                                             else
  12002                                                 --state.firstFreeBlock;
  12003                                         }
  12004                                     }
  12005                                     swapped = true;
  12006                                     break;
  12007                                 }
  12008                             }
  12009                         }
  12010                     }
  12011                     if (swapped)
  12012                         result = VK_INCOMPLETE;
  12013                     break;
  12014                 }
  12015             }
  12016 
  12017             // Move to the beginning
  12018             for (const FragmentedBlock& block : immovableBlocks)
  12019             {
  12020                 VmaBlockVector* vector = m_pBlockVectors[block.data];
  12021                 VmaMutexLockWrite lock(vector->GetMutex(), vector->GetAllocator()->m_UseMutex);
  12022 
  12023                 for (size_t i = m_ImmovableBlockCount; i < vector->GetBlockCount(); ++i)
  12024                 {
  12025                     if (vector->GetBlock(i) == block.block)
  12026                     {
  12027                         std::swap(vector->m_Blocks[i], vector->m_Blocks[m_ImmovableBlockCount++]);
  12028                         break;
  12029                     }
  12030                 }
  12031             }
  12032         } while (false);
  12033     }
  12034 
  12035     // Bulk-map destination blocks
  12036     for (const FragmentedBlock& block : mappedBlocks)
  12037     {
  12038         VkResult res = block.block->Map(allocator, block.data, VMA_NULL);
  12039         VMA_ASSERT(res == VK_SUCCESS);
  12040     }
  12041     return result;
  12042 }
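        // DefragmentPassBegin/DefragmentPassEnd are driven through the public API in
        // a loop. A hedged sketch (copyAllocations is a placeholder for the caller's
        // own copy/recreate logic):
        //
        //     VmaDefragmentationContext defragCtx;
        //     vmaBeginDefragmentation(allocator, &defragInfo, &defragCtx);
        //     for (;;)
        //     {
        //         VmaDefragmentationPassMoveInfo pass = {};
        //         if (vmaBeginDefragmentationPass(allocator, defragCtx, &pass) == VK_SUCCESS)
        //             break; // Nothing left to move.
        //         // VK_INCOMPLETE: copy each pMoves[i].srcAllocation's contents to
        //         // pMoves[i].dstTmpAllocation, or set pMoves[i].operation to
        //         // IGNORE/DESTROY, then commit the pass:
        //         copyAllocations(pass.moveCount, pass.pMoves);
        //         if (vmaEndDefragmentationPass(allocator, defragCtx, &pass) == VK_SUCCESS)
        //             break; // Defragmentation complete.
        //     }
        //     VmaDefragmentationStats stats = {};
        //     vmaEndDefragmentation(allocator, defragCtx, &stats);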
  12043 
  12044 bool VmaDefragmentationContext_T::ComputeDefragmentation(VmaBlockVector& vector, size_t index)
  12045 {
  12046     switch (m_Algorithm)
  12047     {
  12048     case VMA_DEFRAGMENTATION_FLAG_ALGORITHM_FAST_BIT:
  12049         return ComputeDefragmentation_Fast(vector);
  12050     case VMA_DEFRAGMENTATION_FLAG_ALGORITHM_BALANCED_BIT:
  12051         return ComputeDefragmentation_Balanced(vector, index, true);
  12052     case VMA_DEFRAGMENTATION_FLAG_ALGORITHM_FULL_BIT:
  12053         return ComputeDefragmentation_Full(vector);
  12054     case VMA_DEFRAGMENTATION_FLAG_ALGORITHM_EXTENSIVE_BIT:
  12055         return ComputeDefragmentation_Extensive(vector, index);
  12056     default:
  12057         VMA_ASSERT(0);
  12058         return ComputeDefragmentation_Balanced(vector, index, true);
  12059     }
  12060 }
  12061 
  12062 VmaDefragmentationContext_T::MoveAllocationData VmaDefragmentationContext_T::GetMoveData(
  12063     VmaAllocHandle handle, VmaBlockMetadata* metadata)
  12064 {
  12065     MoveAllocationData moveData;
  12066     moveData.move.srcAllocation = (VmaAllocation)metadata->GetAllocationUserData(handle);
  12067     moveData.size = moveData.move.srcAllocation->GetSize();
  12068     moveData.alignment = moveData.move.srcAllocation->GetAlignment();
  12069     moveData.type = moveData.move.srcAllocation->GetSuballocationType();
  12070     moveData.flags = 0;
  12071 
  12072     if (moveData.move.srcAllocation->IsPersistentMap())
  12073         moveData.flags |= VMA_ALLOCATION_CREATE_MAPPED_BIT;
  12074     if (moveData.move.srcAllocation->IsMappingAllowed())
  12075         moveData.flags |= VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT | VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT;
  12076 
  12077     return moveData;
  12078 }
  12079 
  12080 VmaDefragmentationContext_T::CounterStatus VmaDefragmentationContext_T::CheckCounters(VkDeviceSize bytes)
  12081 {
  12082     // Check the custom break criteria, if provided
  12083     if (m_BreakCallback && m_BreakCallback(m_BreakCallbackUserData))
  12084         return CounterStatus::End;
  12085 
  12086     // Ignore the allocation if it would exceed the maximum bytes to copy per pass
  12087     if (m_PassStats.bytesMoved + bytes > m_MaxPassBytes)
  12088     {
  12089         if (++m_IgnoredAllocs < MAX_ALLOCS_TO_IGNORE)
  12090             return CounterStatus::Ignore;
  12091         else
  12092             return CounterStatus::End;
  12093     }
  12094     else
  12095         m_IgnoredAllocs = 0;
  12096     return CounterStatus::Pass;
  12097 }
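        // The break callback consulted above lets the caller abort a pass early. A
        // hedged sketch of a deadline-based callback (names are placeholders):
        //
        //     static VkBool32 VKAPI_PTR BreakAfterDeadline(void* pUserData)
        //     {
        //         const auto* deadline =
        //             static_cast<const std::chrono::steady_clock::time_point*>(pUserData);
        //         return std::chrono::steady_clock::now() >= *deadline ? VK_TRUE : VK_FALSE;
        //     }
        //
        //     defragInfo.pfnBreakCallback = BreakAfterDeadline;
        //     defragInfo.pBreakCallbackUserData = &deadline;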
  12098 
  12099 bool VmaDefragmentationContext_T::IncrementCounters(VkDeviceSize bytes)
  12100 {
  12101     m_PassStats.bytesMoved += bytes;
  12102     // Early return once a per-pass maximum has been reached
  12103     if (++m_PassStats.allocationsMoved >= m_MaxPassAllocations || m_PassStats.bytesMoved >= m_MaxPassBytes)
  12104     {
  12105         VMA_ASSERT((m_PassStats.allocationsMoved == m_MaxPassAllocations ||
  12106             m_PassStats.bytesMoved == m_MaxPassBytes) && "Exceeded maximal pass threshold!");
  12107         return true;
  12108     }
  12109     return false;
  12110 }
  12111 
  12112 bool VmaDefragmentationContext_T::ReallocWithinBlock(VmaBlockVector& vector, VmaDeviceMemoryBlock* block)
  12113 {
  12114     VmaBlockMetadata* metadata = block->m_pMetadata;
  12115 
  12116     for (VmaAllocHandle handle = metadata->GetAllocationListBegin();
  12117         handle != VK_NULL_HANDLE;
  12118         handle = metadata->GetNextAllocation(handle))
  12119     {
  12120         MoveAllocationData moveData = GetMoveData(handle, metadata);
  12121         // Ignore allocations newly created by the defragmentation algorithm
  12122         if (moveData.move.srcAllocation->GetUserData() == this)
  12123             continue;
  12124         switch (CheckCounters(moveData.move.srcAllocation->GetSize()))
  12125         {
  12126         case CounterStatus::Ignore:
  12127             continue;
  12128         case CounterStatus::End:
  12129             return true;
  12130         case CounterStatus::Pass:
  12131             break;
  12132         default:
  12133             VMA_ASSERT(0);
  12134         }
  12135 
  12136         VkDeviceSize offset = moveData.move.srcAllocation->GetOffset();
  12137         if (offset != 0 && metadata->GetSumFreeSize() >= moveData.size)
  12138         {
  12139             VmaAllocationRequest request = {};
  12140             if (metadata->CreateAllocationRequest(
  12141                 moveData.size,
  12142                 moveData.alignment,
  12143                 false,
  12144                 moveData.type,
  12145                 VMA_ALLOCATION_CREATE_STRATEGY_MIN_OFFSET_BIT,
  12146                 &request))
  12147             {
  12148                 if (metadata->GetAllocationOffset(request.allocHandle) < offset)
  12149                 {
  12150                     if (vector.CommitAllocationRequest(
  12151                         request,
  12152                         block,
  12153                         moveData.alignment,
  12154                         moveData.flags,
  12155                         this,
  12156                         moveData.type,
  12157                         &moveData.move.dstTmpAllocation) == VK_SUCCESS)
  12158                     {
  12159                         m_Moves.push_back(moveData.move);
  12160                         if (IncrementCounters(moveData.size))
  12161                             return true;
  12162                     }
  12163                 }
  12164             }
  12165         }
  12166     }
  12167     return false;
  12168 }
  12169 
  12170 bool VmaDefragmentationContext_T::AllocInOtherBlock(size_t start, size_t end, MoveAllocationData& data, VmaBlockVector& vector)
  12171 {
  12172     for (; start < end; ++start)
  12173     {
  12174         VmaDeviceMemoryBlock* dstBlock = vector.GetBlock(start);
  12175         if (dstBlock->m_pMetadata->GetSumFreeSize() >= data.size)
  12176         {
  12177             if (vector.AllocateFromBlock(dstBlock,
  12178                 data.size,
  12179                 data.alignment,
  12180                 data.flags,
  12181                 this,
  12182                 data.type,
  12183                 0,
  12184                 &data.move.dstTmpAllocation) == VK_SUCCESS)
  12185             {
  12186                 m_Moves.push_back(data.move);
  12187                 if (IncrementCounters(data.size))
  12188                     return true;
  12189                 break;
  12190             }
  12191         }
  12192     }
  12193     return false;
  12194 }
  12195 
  12196 bool VmaDefragmentationContext_T::ComputeDefragmentation_Fast(VmaBlockVector& vector)
  12197 {
  12198     // Move only between blocks
  12199 
  12200     // Go through allocations in the last blocks and try to fit them into the first ones
  12201     for (size_t i = vector.GetBlockCount() - 1; i > m_ImmovableBlockCount; --i)
  12202     {
  12203         VmaBlockMetadata* metadata = vector.GetBlock(i)->m_pMetadata;
  12204 
  12205         for (VmaAllocHandle handle = metadata->GetAllocationListBegin();
  12206             handle != VK_NULL_HANDLE;
  12207             handle = metadata->GetNextAllocation(handle))
  12208         {
  12209             MoveAllocationData moveData = GetMoveData(handle, metadata);
  12210             // Ignore allocations newly created by the defragmentation algorithm
  12211             if (moveData.move.srcAllocation->GetUserData() == this)
  12212                 continue;
  12213             switch (CheckCounters(moveData.move.srcAllocation->GetSize()))
  12214             {
  12215             case CounterStatus::Ignore:
  12216                 continue;
  12217             case CounterStatus::End:
  12218                 return true;
  12219             case CounterStatus::Pass:
  12220                 break;
  12221             default:
  12222                 VMA_ASSERT(0);
  12223             }
  12224 
  12225             // Check all previous blocks for free space
  12226             if (AllocInOtherBlock(0, i, moveData, vector))
  12227                 return true;
  12228         }
  12229     }
  12230     return false;
  12231 }
  12232 
  12233 bool VmaDefragmentationContext_T::ComputeDefragmentation_Balanced(VmaBlockVector& vector, size_t index, bool update)
  12234 {
  12235     // Go over every allocation and try to fit it into previous blocks at the lowest offsets;
  12236     // if not possible, realloc within a single block to minimize the offset (excluding offset == 0),
  12237     // but only if there are noticeable gaps between allocations (heuristic, e.g. the average allocation size in the block)
  12238     VMA_ASSERT(m_AlgorithmState != VMA_NULL);
  12239 
  12240     StateBalanced& vectorState = reinterpret_cast<StateBalanced*>(m_AlgorithmState)[index];
  12241     if (update && vectorState.avgAllocSize == UINT64_MAX)
  12242         UpdateVectorStatistics(vector, vectorState);
  12243 
  12244     const size_t startMoveCount = m_Moves.size();
  12245     VkDeviceSize minimalFreeRegion = vectorState.avgFreeSize / 2;
  12246     for (size_t i = vector.GetBlockCount() - 1; i > m_ImmovableBlockCount; --i)
  12247     {
  12248         VmaDeviceMemoryBlock* block = vector.GetBlock(i);
  12249         VmaBlockMetadata* metadata = block->m_pMetadata;
  12250         VkDeviceSize prevFreeRegionSize = 0;
  12251 
  12252         for (VmaAllocHandle handle = metadata->GetAllocationListBegin();
  12253             handle != VK_NULL_HANDLE;
  12254             handle = metadata->GetNextAllocation(handle))
  12255         {
  12256             MoveAllocationData moveData = GetMoveData(handle, metadata);
  12257             // Ignore allocations newly created by the defragmentation algorithm
  12258             if (moveData.move.srcAllocation->GetUserData() == this)
  12259                 continue;
  12260             switch (CheckCounters(moveData.move.srcAllocation->GetSize()))
  12261             {
  12262             case CounterStatus::Ignore:
  12263                 continue;
  12264             case CounterStatus::End:
  12265                 return true;
  12266             case CounterStatus::Pass:
  12267                 break;
  12268             default:
  12269                 VMA_ASSERT(0);
  12270             }
  12271 
  12272             // Check all previous blocks for free space
  12273             const size_t prevMoveCount = m_Moves.size();
  12274             if (AllocInOtherBlock(0, i, moveData, vector))
  12275                 return true;
  12276 
  12277             VkDeviceSize nextFreeRegionSize = metadata->GetNextFreeRegionSize(handle);
  12278             // If no room was found, realloc within the block to a lower offset
  12279             VkDeviceSize offset = moveData.move.srcAllocation->GetOffset();
  12280             if (prevMoveCount == m_Moves.size() && offset != 0 && metadata->GetSumFreeSize() >= moveData.size)
  12281             {
  12282                 // Check if realloc will make sense
  12283                 if (prevFreeRegionSize >= minimalFreeRegion ||
  12284                     nextFreeRegionSize >= minimalFreeRegion ||
  12285                     moveData.size <= vectorState.avgFreeSize ||
  12286                     moveData.size <= vectorState.avgAllocSize)
  12287                 {
  12288                     VmaAllocationRequest request = {};
  12289                     if (metadata->CreateAllocationRequest(
  12290                         moveData.size,
  12291                         moveData.alignment,
  12292                         false,
  12293                         moveData.type,
  12294                         VMA_ALLOCATION_CREATE_STRATEGY_MIN_OFFSET_BIT,
  12295                         &request))
  12296                     {
  12297                         if (metadata->GetAllocationOffset(request.allocHandle) < offset)
  12298                         {
  12299                             if (vector.CommitAllocationRequest(
  12300                                 request,
  12301                                 block,
  12302                                 moveData.alignment,
  12303                                 moveData.flags,
  12304                                 this,
  12305                                 moveData.type,
  12306                                 &moveData.move.dstTmpAllocation) == VK_SUCCESS)
  12307                             {
  12308                                 m_Moves.push_back(moveData.move);
  12309                                 if (IncrementCounters(moveData.size))
  12310                                     return true;
  12311                             }
  12312                         }
  12313                     }
  12314                 }
  12315             }
  12316             prevFreeRegionSize = nextFreeRegionSize;
  12317         }
  12318     }
  12319 
  12320     // No moves performed, update statistics to current vector state
  12321     if (startMoveCount == m_Moves.size() && !update)
  12322     {
  12323         vectorState.avgAllocSize = UINT64_MAX;
  12324         return ComputeDefragmentation_Balanced(vector, index, false);
  12325     }
  12326     return false;
  12327 }
  12328 
  12329 bool VmaDefragmentationContext_T::ComputeDefragmentation_Full(VmaBlockVector& vector)
  12330 {
  12331     // Go over every allocation and try to fit it into previous blocks at the lowest offsets;
  12332     // if not possible, realloc within a single block to minimize the offset (excluding offset == 0)
  12333 
  12334     for (size_t i = vector.GetBlockCount() - 1; i > m_ImmovableBlockCount; --i)
  12335     {
  12336         VmaDeviceMemoryBlock* block = vector.GetBlock(i);
  12337         VmaBlockMetadata* metadata = block->m_pMetadata;
  12338 
  12339         for (VmaAllocHandle handle = metadata->GetAllocationListBegin();
  12340             handle != VK_NULL_HANDLE;
  12341             handle = metadata->GetNextAllocation(handle))
  12342         {
  12343             MoveAllocationData moveData = GetMoveData(handle, metadata);
  12344             // Ignore allocations newly created by the defragmentation algorithm
  12345             if (moveData.move.srcAllocation->GetUserData() == this)
  12346                 continue;
  12347             switch (CheckCounters(moveData.move.srcAllocation->GetSize()))
  12348             {
  12349             case CounterStatus::Ignore:
  12350                 continue;
  12351             case CounterStatus::End:
  12352                 return true;
  12353             case CounterStatus::Pass:
  12354                 break;
  12355             default:
  12356                 VMA_ASSERT(0);
  12357             }
  12358 
  12359             // Check all previous blocks for free space
  12360             const size_t prevMoveCount = m_Moves.size();
  12361             if (AllocInOtherBlock(0, i, moveData, vector))
  12362                 return true;
  12363 
  12364             // If no room was found, realloc within the block to a lower offset
  12365             VkDeviceSize offset = moveData.move.srcAllocation->GetOffset();
  12366             if (prevMoveCount == m_Moves.size() && offset != 0 && metadata->GetSumFreeSize() >= moveData.size)
  12367             {
  12368                 VmaAllocationRequest request = {};
  12369                 if (metadata->CreateAllocationRequest(
  12370                     moveData.size,
  12371                     moveData.alignment,
  12372                     false,
  12373                     moveData.type,
  12374                     VMA_ALLOCATION_CREATE_STRATEGY_MIN_OFFSET_BIT,
  12375                     &request))
  12376                 {
  12377                     if (metadata->GetAllocationOffset(request.allocHandle) < offset)
  12378                     {
  12379                         if (vector.CommitAllocationRequest(
  12380                             request,
  12381                             block,
  12382                             moveData.alignment,
  12383                             moveData.flags,
  12384                             this,
  12385                             moveData.type,
  12386                             &moveData.move.dstTmpAllocation) == VK_SUCCESS)
  12387                         {
  12388                             m_Moves.push_back(moveData.move);
  12389                             if (IncrementCounters(moveData.size))
  12390                                 return true;
  12391                         }
  12392                     }
  12393                 }
  12394             }
  12395         }
  12396     }
  12397     return false;
  12398 }
  12399 
  12400 bool VmaDefragmentationContext_T::ComputeDefragmentation_Extensive(VmaBlockVector& vector, size_t index)
  12401 {
  12402     // First free a single block, then populate it to the brim, then free another block, and so on
  12403 
  12404     // Fall back to the previous algorithm, since without granularity conflicts it can achieve maximum packing
  12405     if (vector.m_BufferImageGranularity == 1)
  12406         return ComputeDefragmentation_Full(vector);
  12407 
  12408     VMA_ASSERT(m_AlgorithmState != VMA_NULL);
  12409 
  12410     StateExtensive& vectorState = reinterpret_cast<StateExtensive*>(m_AlgorithmState)[index];
  12411 
  12412     bool texturePresent = false, bufferPresent = false, otherPresent = false;
  12413     switch (vectorState.operation)
  12414     {
  12415     case StateExtensive::Operation::Done: // Vector defragmented
  12416         return false;
  12417     case StateExtensive::Operation::FindFreeBlockBuffer:
  12418     case StateExtensive::Operation::FindFreeBlockTexture:
  12419     case StateExtensive::Operation::FindFreeBlockAll:
  12420     {
  12421         // No more blocks to free, just perform fast realloc and move to cleanup
  12422         if (vectorState.firstFreeBlock == 0)
  12423         {
  12424             vectorState.operation = StateExtensive::Operation::Cleanup;
  12425             return ComputeDefragmentation_Fast(vector);
  12426         }
  12427 
  12428         // Pick the block to clear: the last one, or the one just before the already-freed range
  12429         size_t last = (vectorState.firstFreeBlock == SIZE_MAX ? vector.GetBlockCount() : vectorState.firstFreeBlock) - 1;
  12430         VmaBlockMetadata* freeMetadata = vector.GetBlock(last)->m_pMetadata;
  12431 
  12432         const size_t prevMoveCount = m_Moves.size();
  12433         for (VmaAllocHandle handle = freeMetadata->GetAllocationListBegin();
  12434             handle != VK_NULL_HANDLE;
  12435             handle = freeMetadata->GetNextAllocation(handle))
  12436         {
  12437             MoveAllocationData moveData = GetMoveData(handle, freeMetadata);
  12438             switch (CheckCounters(moveData.move.srcAllocation->GetSize()))
  12439             {
  12440             case CounterStatus::Ignore:
  12441                 continue;
  12442             case CounterStatus::End:
  12443                 return true;
  12444             case CounterStatus::Pass:
  12445                 break;
  12446             default:
  12447                 VMA_ASSERT(0);
  12448             }
  12449 
  12450             // Check all previous blocks for free space
  12451             if (AllocInOtherBlock(0, last, moveData, vector))
  12452             {
  12453                 // If this was the last allocation, the block has been fully cleared already
  12454                 if (prevMoveCount != m_Moves.size() && freeMetadata->GetNextAllocation(handle) == VK_NULL_HANDLE)
  12455                     vectorState.firstFreeBlock = last;
  12456                 return true;
  12457             }
  12458         }
  12459 
  12460         if (prevMoveCount == m_Moves.size())
  12461         {
  12462             // Cannot perform a full clear; have to move data around within the other blocks
  12463             if (last != 0)
  12464             {
  12465                 for (size_t i = last - 1; i; --i)
  12466                 {
  12467                     if (ReallocWithinBlock(vector, vector.GetBlock(i)))
  12468                         return true;
  12469                 }
  12470             }
  12471 
  12472             if (prevMoveCount == m_Moves.size())
  12473             {
  12474                 // No reallocs possible within the blocks; try moving allocations around with the fast algorithm
  12475                 return ComputeDefragmentation_Fast(vector);
  12476             }
  12477         }
  12478         else
  12479         {
  12480             switch (vectorState.operation)
  12481             {
  12482             case StateExtensive::Operation::FindFreeBlockBuffer:
  12483                 vectorState.operation = StateExtensive::Operation::MoveBuffers;
  12484                 break;
  12485             case StateExtensive::Operation::FindFreeBlockTexture:
  12486                 vectorState.operation = StateExtensive::Operation::MoveTextures;
  12487                 break;
  12488             case StateExtensive::Operation::FindFreeBlockAll:
  12489                 vectorState.operation = StateExtensive::Operation::MoveAll;
  12490                 break;
  12491             default:
  12492                 VMA_ASSERT(0);
  12493                 vectorState.operation = StateExtensive::Operation::MoveTextures;
  12494             }
  12495             vectorState.firstFreeBlock = last;
  12496             // Nothing was moved: a free block was found without reallocations, so more reallocs can be performed in the same pass
  12497             return ComputeDefragmentation_Extensive(vector, index);
  12498         }
  12499         break;
  12500     }
  12501     case StateExtensive::Operation::MoveTextures:
  12502     {
  12503         if (MoveDataToFreeBlocks(VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL, vector,
  12504             vectorState.firstFreeBlock, texturePresent, bufferPresent, otherPresent))
  12505         {
  12506             if (texturePresent)
  12507             {
  12508                 vectorState.operation = StateExtensive::Operation::FindFreeBlockTexture;
  12509                 return ComputeDefragmentation_Extensive(vector, index);
  12510             }
  12511 
  12512             if (!bufferPresent && !otherPresent)
  12513             {
  12514                 vectorState.operation = StateExtensive::Operation::Cleanup;
  12515                 break;
  12516             }
  12517 
  12518             // No more textures to move, check buffers
  12519             vectorState.operation = StateExtensive::Operation::MoveBuffers;
  12520             bufferPresent = false;
  12521             otherPresent = false;
  12522         }
  12523         else
  12524             break;
  12525         VMA_FALLTHROUGH; // Fallthrough
  12526     }
  12527     case StateExtensive::Operation::MoveBuffers:
  12528     {
  12529         if (MoveDataToFreeBlocks(VMA_SUBALLOCATION_TYPE_BUFFER, vector,
  12530             vectorState.firstFreeBlock, texturePresent, bufferPresent, otherPresent))
  12531         {
  12532             if (bufferPresent)
  12533             {
  12534                 vectorState.operation = StateExtensive::Operation::FindFreeBlockBuffer;
  12535                 return ComputeDefragmentation_Extensive(vector, index);
  12536             }
  12537 
  12538             if (!otherPresent)
  12539             {
  12540                 vectorState.operation = StateExtensive::Operation::Cleanup;
  12541                 break;
  12542             }
  12543 
  12544             // No more buffers to move, check all others
  12545             vectorState.operation = StateExtensive::Operation::MoveAll;
  12546             otherPresent = false;
  12547         }
  12548         else
  12549             break;
  12550         VMA_FALLTHROUGH; // Fallthrough
  12551     }
  12552     case StateExtensive::Operation::MoveAll:
  12553     {
  12554         if (MoveDataToFreeBlocks(VMA_SUBALLOCATION_TYPE_FREE, vector,
  12555             vectorState.firstFreeBlock, texturePresent, bufferPresent, otherPresent))
  12556         {
  12557             if (otherPresent)
  12558             {
  12559                 vectorState.operation = StateExtensive::Operation::FindFreeBlockBuffer;
  12560                 return ComputeDefragmentation_Extensive(vector, index);
  12561             }
  12562             // Everything moved
  12563             vectorState.operation = StateExtensive::Operation::Cleanup;
  12564         }
  12565         break;
  12566     }
  12567     case StateExtensive::Operation::Cleanup:
  12568         // Cleanup is handled below so that other operations may reuse the cleanup code. This case is here to prevent the unhandled enum value warning (C4062).
  12569         break;
  12570     }
  12571 
  12572     if (vectorState.operation == StateExtensive::Operation::Cleanup)
  12573     {
  12574         // All other work done, pack data in blocks even tighter if possible
  12575         const size_t prevMoveCount = m_Moves.size();
  12576         for (size_t i = 0; i < vector.GetBlockCount(); ++i)
  12577         {
  12578             if (ReallocWithinBlock(vector, vector.GetBlock(i)))
  12579                 return true;
  12580         }
  12581 
  12582         if (prevMoveCount == m_Moves.size())
  12583             vectorState.operation = StateExtensive::Operation::Done;
  12584     }
  12585     return false;
  12586 }
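        // Rough shape of the per-vector state machine above (a sketch; FindFreeBlock*
        // means "empty one more block", and the initial state is FindFreeBlockTexture):
        //
        //     FindFreeBlockTexture -> MoveTextures -> FindFreeBlockTexture (textures left)
        //                                          -> MoveBuffers          (otherwise)
        //     MoveBuffers          -> FindFreeBlockBuffer (buffers left)
        //                          -> MoveAll             (otherwise)
        //     MoveAll              -> FindFreeBlockBuffer (resources left)
        //     Cleanup              -> Done (once no further reallocs succeed)
        //
        // Any Move* state goes straight to Cleanup when nothing of the remaining
        // resource types is present.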
  12587 
  12588 void VmaDefragmentationContext_T::UpdateVectorStatistics(VmaBlockVector& vector, StateBalanced& state)
  12589 {
  12590     size_t allocCount = 0;
  12591     size_t freeCount = 0;
  12592     state.avgFreeSize = 0;
  12593     state.avgAllocSize = 0;
  12594 
  12595     for (size_t i = 0; i < vector.GetBlockCount(); ++i)
  12596     {
  12597         VmaBlockMetadata* metadata = vector.GetBlock(i)->m_pMetadata;
  12598 
  12599         allocCount += metadata->GetAllocationCount();
  12600         freeCount += metadata->GetFreeRegionsCount();
  12601         state.avgFreeSize += metadata->GetSumFreeSize();
  12602         state.avgAllocSize += metadata->GetSize();
  12603     }
  12604 
  12605     state.avgAllocSize = (state.avgAllocSize - state.avgFreeSize) / allocCount;
  12606     state.avgFreeSize /= freeCount;
  12607 }
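        // A worked example of the averages above (hypothetical numbers): one 64 MiB
        // block holding 4 allocations with 48 MiB free in 3 free regions gives
        // avgAllocSize = (64 - 48) / 4 = 4 MiB and avgFreeSize = 48 / 3 = 16 MiB.
        // ComputeDefragmentation_Balanced then derives minimalFreeRegion from
        // avgFreeSize / 2.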
  12608 
  12609 bool VmaDefragmentationContext_T::MoveDataToFreeBlocks(VmaSuballocationType currentType,
  12610     VmaBlockVector& vector, size_t firstFreeBlock,
  12611     bool& texturePresent, bool& bufferPresent, bool& otherPresent)
  12612 {
  12613     const size_t prevMoveCount = m_Moves.size();
  12614     for (size_t i = firstFreeBlock; i;)
  12615     {
  12616         VmaDeviceMemoryBlock* block = vector.GetBlock(--i);
  12617         VmaBlockMetadata* metadata = block->m_pMetadata;
  12618 
  12619         for (VmaAllocHandle handle = metadata->GetAllocationListBegin();
  12620             handle != VK_NULL_HANDLE;
  12621             handle = metadata->GetNextAllocation(handle))
  12622         {
  12623             MoveAllocationData moveData = GetMoveData(handle, metadata);
  12624             // Ignore allocations newly created by the defragmentation algorithm
  12625             if (moveData.move.srcAllocation->GetUserData() == this)
  12626                 continue;
  12627             switch (CheckCounters(moveData.move.srcAllocation->GetSize()))
  12628             {
  12629             case CounterStatus::Ignore:
  12630                 continue;
  12631             case CounterStatus::End:
  12632                 return true;
  12633             case CounterStatus::Pass:
  12634                 break;
  12635             default:
  12636                 VMA_ASSERT(0);
  12637             }
  12638 
  12639             // Move only a single type of resource at a time
  12640             if (!VmaIsBufferImageGranularityConflict(moveData.type, currentType))
  12641             {
  12642                 // Try to fit allocation into free blocks
  12643                 if (AllocInOtherBlock(firstFreeBlock, vector.GetBlockCount(), moveData, vector))
  12644                     return false;
  12645             }
  12646 
  12647             if (!VmaIsBufferImageGranularityConflict(moveData.type, VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL))
  12648                 texturePresent = true;
  12649             else if (!VmaIsBufferImageGranularityConflict(moveData.type, VMA_SUBALLOCATION_TYPE_BUFFER))
  12650                 bufferPresent = true;
  12651             else
  12652                 otherPresent = true;
  12653         }
  12654     }
  12655     return prevMoveCount == m_Moves.size();
  12656 }
  12657 #endif // _VMA_DEFRAGMENTATION_CONTEXT_FUNCTIONS
  12658 
  12659 #ifndef _VMA_POOL_T_FUNCTIONS
  12660 VmaPool_T::VmaPool_T(
  12661     VmaAllocator hAllocator,
  12662     const VmaPoolCreateInfo& createInfo,
  12663     VkDeviceSize preferredBlockSize)
  12664     : m_BlockVector(
  12665         hAllocator,
  12666         this, // hParentPool
  12667         createInfo.memoryTypeIndex,
  12668         createInfo.blockSize != 0 ? createInfo.blockSize : preferredBlockSize,
  12669         createInfo.minBlockCount,
  12670         createInfo.maxBlockCount,
  12671         (createInfo.flags & VMA_POOL_CREATE_IGNORE_BUFFER_IMAGE_GRANULARITY_BIT) != 0 ? 1 : hAllocator->GetBufferImageGranularity(),
  12672         createInfo.blockSize != 0, // explicitBlockSize
  12673         createInfo.flags & VMA_POOL_CREATE_ALGORITHM_MASK, // algorithm
  12674         createInfo.priority,
  12675         VMA_MAX(hAllocator->GetMemoryTypeMinAlignment(createInfo.memoryTypeIndex), createInfo.minAllocationAlignment),
  12676         createInfo.pMemoryAllocateNext),
  12677     m_Id(0),
  12678     m_Name(VMA_NULL) {}
  12679 
  12680 VmaPool_T::~VmaPool_T()
  12681 {
  12682     VMA_ASSERT(m_PrevPool == VMA_NULL && m_NextPool == VMA_NULL);
  12683 
  12684     const VkAllocationCallbacks* allocs = m_BlockVector.GetAllocator()->GetAllocationCallbacks();
  12685     VmaFreeString(allocs, m_Name);
  12686 }
  12687 
  12688 void VmaPool_T::SetName(const char* pName)
  12689 {
  12690     const VkAllocationCallbacks* allocs = m_BlockVector.GetAllocator()->GetAllocationCallbacks();
  12691     VmaFreeString(allocs, m_Name);
  12692 
  12693     if (pName != VMA_NULL)
  12694     {
  12695         m_Name = VmaCreateStringCopy(allocs, pName);
  12696     }
  12697     else
  12698     {
  12699         m_Name = VMA_NULL;
  12700     }
  12701 }
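        // Pools like this one are created through the public API. A hedged sketch
        // (memTypeIndex is assumed to come from vmaFindMemoryTypeIndex* beforehand):
        //
        //     VmaPoolCreateInfo poolInfo = {};
        //     poolInfo.memoryTypeIndex = memTypeIndex;
        //     poolInfo.blockSize = 0;     // 0 = use the allocator's preferred block size.
        //     poolInfo.maxBlockCount = 0; // 0 = no limit on block count.
        //
        //     VmaPool pool;
        //     if (vmaCreatePool(allocator, &poolInfo, &pool) == VK_SUCCESS)
        //         vmaSetPoolName(allocator, pool, "TexturePool");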
  12702 #endif // _VMA_POOL_T_FUNCTIONS
  12703 
  12704 #ifndef _VMA_ALLOCATOR_T_FUNCTIONS
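        // The constructor below consumes VmaAllocatorCreateInfo. A hedged creation
        // sketch (the extension flags must only be set when the corresponding device
        // extensions were actually enabled):
        //
        //     VmaAllocatorCreateInfo allocatorInfo = {};
        //     allocatorInfo.vulkanApiVersion = VK_API_VERSION_1_2;
        //     allocatorInfo.physicalDevice = physicalDevice;
        //     allocatorInfo.device = device;
        //     allocatorInfo.instance = instance;
        //     allocatorInfo.flags = VMA_ALLOCATOR_CREATE_EXT_MEMORY_BUDGET_BIT |
        //                           VMA_ALLOCATOR_CREATE_BUFFER_DEVICE_ADDRESS_BIT;
        //
        //     VmaAllocator allocator;
        //     VkResult res = vmaCreateAllocator(&allocatorInfo, &allocator);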
  12705 VmaAllocator_T::VmaAllocator_T(const VmaAllocatorCreateInfo* pCreateInfo) :
  12706     m_UseMutex((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_EXTERNALLY_SYNCHRONIZED_BIT) == 0),
  12707     m_VulkanApiVersion(pCreateInfo->vulkanApiVersion != 0 ? pCreateInfo->vulkanApiVersion : VK_API_VERSION_1_0),
  12708     m_UseKhrDedicatedAllocation((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT) != 0),
  12709     m_UseKhrBindMemory2((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_KHR_BIND_MEMORY2_BIT) != 0),
  12710     m_UseExtMemoryBudget((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_EXT_MEMORY_BUDGET_BIT) != 0),
  12711     m_UseAmdDeviceCoherentMemory((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_AMD_DEVICE_COHERENT_MEMORY_BIT) != 0),
  12712     m_UseKhrBufferDeviceAddress((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_BUFFER_DEVICE_ADDRESS_BIT) != 0),
  12713     m_UseExtMemoryPriority((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_EXT_MEMORY_PRIORITY_BIT) != 0),
  12714     m_UseKhrMaintenance4((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_KHR_MAINTENANCE4_BIT) != 0),
  12715     m_UseKhrMaintenance5((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_KHR_MAINTENANCE5_BIT) != 0),
  12716     m_hDevice(pCreateInfo->device),
  12717     m_hInstance(pCreateInfo->instance),
  12718     m_AllocationCallbacksSpecified(pCreateInfo->pAllocationCallbacks != VMA_NULL),
  12719     m_AllocationCallbacks(pCreateInfo->pAllocationCallbacks ?
  12720         *pCreateInfo->pAllocationCallbacks : VmaEmptyAllocationCallbacks),
  12721     m_AllocationObjectAllocator(&m_AllocationCallbacks),
  12722     m_HeapSizeLimitMask(0),
  12723     m_DeviceMemoryCount(0),
  12724     m_PreferredLargeHeapBlockSize(0),
  12725     m_PhysicalDevice(pCreateInfo->physicalDevice),
  12726     m_GpuDefragmentationMemoryTypeBits(UINT32_MAX),
  12727     m_NextPoolId(0),
  12728     m_GlobalMemoryTypeBits(UINT32_MAX)
  12729 {
  12730     if(m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0))
  12731     {
  12732         m_UseKhrDedicatedAllocation = false;
  12733         m_UseKhrBindMemory2 = false;
  12734     }
  12735 
  12736     if(VMA_DEBUG_DETECT_CORRUPTION)
  12737     {
  12738         // Needs to be a multiple of sizeof(uint32_t) because we are going to write VMA_CORRUPTION_DETECTION_MAGIC_VALUE to it.
  12739         VMA_ASSERT(VMA_DEBUG_MARGIN % sizeof(uint32_t) == 0);
  12740     }
  12741 
  12742     VMA_ASSERT(pCreateInfo->physicalDevice && pCreateInfo->device && pCreateInfo->instance);
  12743 
  12744     if(m_VulkanApiVersion < VK_MAKE_VERSION(1, 1, 0))
  12745     {
  12746 #if !(VMA_DEDICATED_ALLOCATION)
  12747         if((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT) != 0)
  12748         {
  12749             VMA_ASSERT(0 && "VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT set but required extensions are disabled by preprocessor macros.");
  12750         }
  12751 #endif
  12752 #if !(VMA_BIND_MEMORY2)
  12753         if((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_KHR_BIND_MEMORY2_BIT) != 0)
  12754         {
  12755             VMA_ASSERT(0 && "VMA_ALLOCATOR_CREATE_KHR_BIND_MEMORY2_BIT set but required extension is disabled by preprocessor macros.");
  12756         }
  12757 #endif
  12758     }
  12759 #if !(VMA_MEMORY_BUDGET)
  12760     if((pCreateInfo->flags & VMA_ALLOCATOR_CREATE_EXT_MEMORY_BUDGET_BIT) != 0)
  12761     {
  12762         VMA_ASSERT(0 && "VMA_ALLOCATOR_CREATE_EXT_MEMORY_BUDGET_BIT set but required extension is disabled by preprocessor macros.");
  12763     }
  12764 #endif
  12765 #if !(VMA_BUFFER_DEVICE_ADDRESS)
  12766     if(m_UseKhrBufferDeviceAddress)
  12767     {
  12768         VMA_ASSERT(0 && "VMA_ALLOCATOR_CREATE_BUFFER_DEVICE_ADDRESS_BIT is set but required extension or Vulkan 1.2 is not available in your Vulkan header or its support in VMA has been disabled by a preprocessor macro.");
  12769     }
  12770 #endif
  12771 #if VMA_VULKAN_VERSION < 1003000
  12772     if(m_VulkanApiVersion >= VK_MAKE_VERSION(1, 3, 0))
  12773     {
  12774         VMA_ASSERT(0 && "vulkanApiVersion >= VK_API_VERSION_1_3 but required Vulkan version is disabled by preprocessor macros.");
  12775     }
  12776 #endif
  12777 #if VMA_VULKAN_VERSION < 1002000
  12778     if(m_VulkanApiVersion >= VK_MAKE_VERSION(1, 2, 0))
  12779     {
  12780         VMA_ASSERT(0 && "vulkanApiVersion >= VK_API_VERSION_1_2 but required Vulkan version is disabled by preprocessor macros.");
  12781     }
  12782 #endif
  12783 #if VMA_VULKAN_VERSION < 1001000
  12784     if(m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0))
  12785     {
  12786         VMA_ASSERT(0 && "vulkanApiVersion >= VK_API_VERSION_1_1 but required Vulkan version is disabled by preprocessor macros.");
  12787     }
  12788 #endif
  12789 #if !(VMA_MEMORY_PRIORITY)
  12790     if(m_UseExtMemoryPriority)
  12791     {
  12792         VMA_ASSERT(0 && "VMA_ALLOCATOR_CREATE_EXT_MEMORY_PRIORITY_BIT is set but required extension is not available in your Vulkan header or its support in VMA has been disabled by a preprocessor macro.");
  12793     }
  12794 #endif
  12795 #if !(VMA_KHR_MAINTENANCE4)
  12796     if(m_UseKhrMaintenance4)
  12797     {
  12798         VMA_ASSERT(0 && "VMA_ALLOCATOR_CREATE_KHR_MAINTENANCE4_BIT is set but required extension is not available in your Vulkan header or its support in VMA has been disabled by a preprocessor macro.");
  12799     }
  12800 #endif
  12801 #if !(VMA_KHR_MAINTENANCE5)
  12802     if(m_UseKhrMaintenance5)
  12803     {
  12804         VMA_ASSERT(0 && "VMA_ALLOCATOR_CREATE_KHR_MAINTENANCE5_BIT is set but required extension is not available in your Vulkan header or its support in VMA has been disabled by a preprocessor macro.");
  12805     }
  12806 #endif
  12807 
  12808     memset(&m_DeviceMemoryCallbacks, 0, sizeof(m_DeviceMemoryCallbacks));
  12809     memset(&m_PhysicalDeviceProperties, 0, sizeof(m_PhysicalDeviceProperties));
  12810     memset(&m_MemProps, 0, sizeof(m_MemProps));
  12811 
  12812     memset(&m_pBlockVectors, 0, sizeof(m_pBlockVectors));
  12813     memset(&m_VulkanFunctions, 0, sizeof(m_VulkanFunctions));
  12814 
  12815 #if VMA_EXTERNAL_MEMORY
  12816     memset(&m_TypeExternalMemoryHandleTypes, 0, sizeof(m_TypeExternalMemoryHandleTypes));
  12817 #endif // #if VMA_EXTERNAL_MEMORY
  12818 
  12819     if(pCreateInfo->pDeviceMemoryCallbacks != VMA_NULL)
  12820     {
  12821         m_DeviceMemoryCallbacks.pUserData = pCreateInfo->pDeviceMemoryCallbacks->pUserData;
  12822         m_DeviceMemoryCallbacks.pfnAllocate = pCreateInfo->pDeviceMemoryCallbacks->pfnAllocate;
  12823         m_DeviceMemoryCallbacks.pfnFree = pCreateInfo->pDeviceMemoryCallbacks->pfnFree;
  12824     }
  12825 
  12826     ImportVulkanFunctions(pCreateInfo->pVulkanFunctions);
  12827 
  12828     (*m_VulkanFunctions.vkGetPhysicalDeviceProperties)(m_PhysicalDevice, &m_PhysicalDeviceProperties);
  12829     (*m_VulkanFunctions.vkGetPhysicalDeviceMemoryProperties)(m_PhysicalDevice, &m_MemProps);
  12830 
  12831     VMA_ASSERT(VmaIsPow2(VMA_MIN_ALIGNMENT));
  12832     VMA_ASSERT(VmaIsPow2(VMA_DEBUG_MIN_BUFFER_IMAGE_GRANULARITY));
  12833     VMA_ASSERT(VmaIsPow2(m_PhysicalDeviceProperties.limits.bufferImageGranularity));
  12834     VMA_ASSERT(VmaIsPow2(m_PhysicalDeviceProperties.limits.nonCoherentAtomSize));
  12835 
  12836     m_PreferredLargeHeapBlockSize = (pCreateInfo->preferredLargeHeapBlockSize != 0) ?
  12837         pCreateInfo->preferredLargeHeapBlockSize : static_cast<VkDeviceSize>(VMA_DEFAULT_LARGE_HEAP_BLOCK_SIZE);
  12838 
  12839     m_GlobalMemoryTypeBits = CalculateGlobalMemoryTypeBits();
  12840 
  12841 #if VMA_EXTERNAL_MEMORY
  12842     if(pCreateInfo->pTypeExternalMemoryHandleTypes != VMA_NULL)
  12843     {
  12844         memcpy(m_TypeExternalMemoryHandleTypes, pCreateInfo->pTypeExternalMemoryHandleTypes,
  12845             sizeof(VkExternalMemoryHandleTypeFlagsKHR) * GetMemoryTypeCount());
  12846     }
  12847 #endif // #if VMA_EXTERNAL_MEMORY
  12848 
  12849     if(pCreateInfo->pHeapSizeLimit != VMA_NULL)
  12850     {
  12851         for(uint32_t heapIndex = 0; heapIndex < GetMemoryHeapCount(); ++heapIndex)
  12852         {
  12853             const VkDeviceSize limit = pCreateInfo->pHeapSizeLimit[heapIndex];
  12854             if(limit != VK_WHOLE_SIZE)
  12855             {
  12856                 m_HeapSizeLimitMask |= 1u << heapIndex;
  12857                 if(limit < m_MemProps.memoryHeaps[heapIndex].size)
  12858                 {
  12859                     m_MemProps.memoryHeaps[heapIndex].size = limit;
  12860                 }
  12861             }
  12862         }
  12863     }
  12864 
  12865     for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
  12866     {
  12867             // Create block vectors only for supported memory types.
  12868         if((m_GlobalMemoryTypeBits & (1u << memTypeIndex)) != 0)
  12869         {
  12870             const VkDeviceSize preferredBlockSize = CalcPreferredBlockSize(memTypeIndex);
  12871             m_pBlockVectors[memTypeIndex] = vma_new(this, VmaBlockVector)(
  12872                 this,
  12873                 VK_NULL_HANDLE, // hParentPool
  12874                 memTypeIndex,
  12875                 preferredBlockSize,
  12876                 0, // minBlockCount
  12877                 SIZE_MAX, // maxBlockCount
  12878                 GetBufferImageGranularity(),
  12879                 false, // explicitBlockSize
  12880                 0, // algorithm
  12881                 0.5f, // priority (0.5 is the default per Vulkan spec)
  12882                 GetMemoryTypeMinAlignment(memTypeIndex), // minAllocationAlignment
  12883                 VMA_NULL); // pMemoryAllocateNext
  12884             // No need to call m_pBlockVectors[memTypeIndex]->CreateMinBlocks here,
  12885             // because minBlockCount is 0.
  12886         }
  12887     }
  12888 }
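
// Editor's note: a minimal sketch (not upstream VMA code) of how the
// pHeapSizeLimit array consumed by the constructor above can be filled.
// VK_WHOLE_SIZE means "no limit" for a heap; any smaller value clamps the
// heap size as this allocator sees it. Entry order must match
// VkPhysicalDeviceMemoryProperties::memoryHeaps. Variable names are
// illustrative.
//
//     VkDeviceSize heapLimits[VK_MAX_MEMORY_HEAPS];
//     for(uint32_t i = 0; i < VK_MAX_MEMORY_HEAPS; ++i)
//         heapLimits[i] = VK_WHOLE_SIZE;   // no limit by default
//     heapLimits[0] = 256ull * 1024 * 1024; // cap heap 0 at 256 MiB
//
//     VmaAllocatorCreateInfo allocatorCreateInfo = {};
//     allocatorCreateInfo.pHeapSizeLimit = heapLimits;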
  12889 
  12890 VkResult VmaAllocator_T::Init(const VmaAllocatorCreateInfo* pCreateInfo)
  12891 {
  12892     VkResult res = VK_SUCCESS;
  12893 
  12894 #if VMA_MEMORY_BUDGET
  12895     if(m_UseExtMemoryBudget)
  12896     {
  12897         UpdateVulkanBudget();
  12898     }
  12899 #endif // #if VMA_MEMORY_BUDGET
  12900 
  12901     return res;
  12902 }
  12903 
  12904 VmaAllocator_T::~VmaAllocator_T()
  12905 {
  12906     VMA_ASSERT(m_Pools.IsEmpty());
  12907 
  12908     for(size_t memTypeIndex = GetMemoryTypeCount(); memTypeIndex--; )
  12909     {
  12910         vma_delete(this, m_pBlockVectors[memTypeIndex]);
  12911     }
  12912 }
  12913 
  12914 void VmaAllocator_T::ImportVulkanFunctions(const VmaVulkanFunctions* pVulkanFunctions)
  12915 {
  12916 #if VMA_STATIC_VULKAN_FUNCTIONS == 1
  12917     ImportVulkanFunctions_Static();
  12918 #endif
  12919 
  12920     if(pVulkanFunctions != VMA_NULL)
  12921     {
  12922         ImportVulkanFunctions_Custom(pVulkanFunctions);
  12923     }
  12924 
  12925 #if VMA_DYNAMIC_VULKAN_FUNCTIONS == 1
  12926     ImportVulkanFunctions_Dynamic();
  12927 #endif
  12928 
  12929     ValidateVulkanFunctions();
  12930 }
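
// Editor's note: sketch of the minimal caller-side setup that
// ImportVulkanFunctions() above expects when VMA_DYNAMIC_VULKAN_FUNCTIONS
// is 1: only the two loader entry points are required; every other member
// may stay null and is fetched by ImportVulkanFunctions_Dynamic().
//
//     VmaVulkanFunctions vulkanFunctions = {};
//     vulkanFunctions.vkGetInstanceProcAddr = vkGetInstanceProcAddr;
//     vulkanFunctions.vkGetDeviceProcAddr = vkGetDeviceProcAddr;
//
//     VmaAllocatorCreateInfo allocatorCreateInfo = {};
//     allocatorCreateInfo.pVulkanFunctions = &vulkanFunctions;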
  12931 
  12932 #if VMA_STATIC_VULKAN_FUNCTIONS == 1
  12933 
  12934 void VmaAllocator_T::ImportVulkanFunctions_Static()
  12935 {
  12936     // Vulkan 1.0
  12937     m_VulkanFunctions.vkGetInstanceProcAddr = (PFN_vkGetInstanceProcAddr)vkGetInstanceProcAddr;
  12938     m_VulkanFunctions.vkGetDeviceProcAddr = (PFN_vkGetDeviceProcAddr)vkGetDeviceProcAddr;
  12939     m_VulkanFunctions.vkGetPhysicalDeviceProperties = (PFN_vkGetPhysicalDeviceProperties)vkGetPhysicalDeviceProperties;
  12940     m_VulkanFunctions.vkGetPhysicalDeviceMemoryProperties = (PFN_vkGetPhysicalDeviceMemoryProperties)vkGetPhysicalDeviceMemoryProperties;
  12941     m_VulkanFunctions.vkAllocateMemory = (PFN_vkAllocateMemory)vkAllocateMemory;
  12942     m_VulkanFunctions.vkFreeMemory = (PFN_vkFreeMemory)vkFreeMemory;
  12943     m_VulkanFunctions.vkMapMemory = (PFN_vkMapMemory)vkMapMemory;
  12944     m_VulkanFunctions.vkUnmapMemory = (PFN_vkUnmapMemory)vkUnmapMemory;
  12945     m_VulkanFunctions.vkFlushMappedMemoryRanges = (PFN_vkFlushMappedMemoryRanges)vkFlushMappedMemoryRanges;
  12946     m_VulkanFunctions.vkInvalidateMappedMemoryRanges = (PFN_vkInvalidateMappedMemoryRanges)vkInvalidateMappedMemoryRanges;
  12947     m_VulkanFunctions.vkBindBufferMemory = (PFN_vkBindBufferMemory)vkBindBufferMemory;
  12948     m_VulkanFunctions.vkBindImageMemory = (PFN_vkBindImageMemory)vkBindImageMemory;
  12949     m_VulkanFunctions.vkGetBufferMemoryRequirements = (PFN_vkGetBufferMemoryRequirements)vkGetBufferMemoryRequirements;
  12950     m_VulkanFunctions.vkGetImageMemoryRequirements = (PFN_vkGetImageMemoryRequirements)vkGetImageMemoryRequirements;
  12951     m_VulkanFunctions.vkCreateBuffer = (PFN_vkCreateBuffer)vkCreateBuffer;
  12952     m_VulkanFunctions.vkDestroyBuffer = (PFN_vkDestroyBuffer)vkDestroyBuffer;
  12953     m_VulkanFunctions.vkCreateImage = (PFN_vkCreateImage)vkCreateImage;
  12954     m_VulkanFunctions.vkDestroyImage = (PFN_vkDestroyImage)vkDestroyImage;
  12955     m_VulkanFunctions.vkCmdCopyBuffer = (PFN_vkCmdCopyBuffer)vkCmdCopyBuffer;
  12956 
  12957     // Vulkan 1.1
  12958 #if VMA_VULKAN_VERSION >= 1001000
  12959     if(m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0))
  12960     {
  12961         m_VulkanFunctions.vkGetBufferMemoryRequirements2KHR = (PFN_vkGetBufferMemoryRequirements2)vkGetBufferMemoryRequirements2;
  12962         m_VulkanFunctions.vkGetImageMemoryRequirements2KHR = (PFN_vkGetImageMemoryRequirements2)vkGetImageMemoryRequirements2;
  12963         m_VulkanFunctions.vkBindBufferMemory2KHR = (PFN_vkBindBufferMemory2)vkBindBufferMemory2;
  12964         m_VulkanFunctions.vkBindImageMemory2KHR = (PFN_vkBindImageMemory2)vkBindImageMemory2;
  12965     }
  12966 #endif
  12967 
  12968 #if VMA_VULKAN_VERSION >= 1001000
  12969     if(m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0))
  12970     {
  12971         m_VulkanFunctions.vkGetPhysicalDeviceMemoryProperties2KHR = (PFN_vkGetPhysicalDeviceMemoryProperties2)vkGetPhysicalDeviceMemoryProperties2;
  12972     }
  12973 #endif
  12974 
  12975 #if VMA_VULKAN_VERSION >= 1003000
  12976     if(m_VulkanApiVersion >= VK_MAKE_VERSION(1, 3, 0))
  12977     {
  12978         m_VulkanFunctions.vkGetDeviceBufferMemoryRequirements = (PFN_vkGetDeviceBufferMemoryRequirements)vkGetDeviceBufferMemoryRequirements;
  12979         m_VulkanFunctions.vkGetDeviceImageMemoryRequirements = (PFN_vkGetDeviceImageMemoryRequirements)vkGetDeviceImageMemoryRequirements;
  12980     }
  12981 #endif
  12982 }
  12983 
  12984 #endif // VMA_STATIC_VULKAN_FUNCTIONS == 1
  12985 
  12986 void VmaAllocator_T::ImportVulkanFunctions_Custom(const VmaVulkanFunctions* pVulkanFunctions)
  12987 {
  12988     VMA_ASSERT(pVulkanFunctions != VMA_NULL);
  12989 
  12990 #define VMA_COPY_IF_NOT_NULL(funcName) \
  12991     if(pVulkanFunctions->funcName != VMA_NULL) m_VulkanFunctions.funcName = pVulkanFunctions->funcName;
  12992 
  12993     VMA_COPY_IF_NOT_NULL(vkGetInstanceProcAddr);
  12994     VMA_COPY_IF_NOT_NULL(vkGetDeviceProcAddr);
  12995     VMA_COPY_IF_NOT_NULL(vkGetPhysicalDeviceProperties);
  12996     VMA_COPY_IF_NOT_NULL(vkGetPhysicalDeviceMemoryProperties);
  12997     VMA_COPY_IF_NOT_NULL(vkAllocateMemory);
  12998     VMA_COPY_IF_NOT_NULL(vkFreeMemory);
  12999     VMA_COPY_IF_NOT_NULL(vkMapMemory);
  13000     VMA_COPY_IF_NOT_NULL(vkUnmapMemory);
  13001     VMA_COPY_IF_NOT_NULL(vkFlushMappedMemoryRanges);
  13002     VMA_COPY_IF_NOT_NULL(vkInvalidateMappedMemoryRanges);
  13003     VMA_COPY_IF_NOT_NULL(vkBindBufferMemory);
  13004     VMA_COPY_IF_NOT_NULL(vkBindImageMemory);
  13005     VMA_COPY_IF_NOT_NULL(vkGetBufferMemoryRequirements);
  13006     VMA_COPY_IF_NOT_NULL(vkGetImageMemoryRequirements);
  13007     VMA_COPY_IF_NOT_NULL(vkCreateBuffer);
  13008     VMA_COPY_IF_NOT_NULL(vkDestroyBuffer);
  13009     VMA_COPY_IF_NOT_NULL(vkCreateImage);
  13010     VMA_COPY_IF_NOT_NULL(vkDestroyImage);
  13011     VMA_COPY_IF_NOT_NULL(vkCmdCopyBuffer);
  13012 
  13013 #if VMA_DEDICATED_ALLOCATION || VMA_VULKAN_VERSION >= 1001000
  13014     VMA_COPY_IF_NOT_NULL(vkGetBufferMemoryRequirements2KHR);
  13015     VMA_COPY_IF_NOT_NULL(vkGetImageMemoryRequirements2KHR);
  13016 #endif
  13017 
  13018 #if VMA_BIND_MEMORY2 || VMA_VULKAN_VERSION >= 1001000
  13019     VMA_COPY_IF_NOT_NULL(vkBindBufferMemory2KHR);
  13020     VMA_COPY_IF_NOT_NULL(vkBindImageMemory2KHR);
  13021 #endif
  13022 
  13023 #if VMA_MEMORY_BUDGET || VMA_VULKAN_VERSION >= 1001000
  13024     VMA_COPY_IF_NOT_NULL(vkGetPhysicalDeviceMemoryProperties2KHR);
  13025 #endif
  13026 
  13027 #if VMA_KHR_MAINTENANCE4 || VMA_VULKAN_VERSION >= 1003000
  13028     VMA_COPY_IF_NOT_NULL(vkGetDeviceBufferMemoryRequirements);
  13029     VMA_COPY_IF_NOT_NULL(vkGetDeviceImageMemoryRequirements);
  13030 #endif
  13031 
  13032 #undef VMA_COPY_IF_NOT_NULL
  13033 }
  13034 
  13035 #if VMA_DYNAMIC_VULKAN_FUNCTIONS == 1
  13036 
  13037 void VmaAllocator_T::ImportVulkanFunctions_Dynamic()
  13038 {
  13039     VMA_ASSERT(m_VulkanFunctions.vkGetInstanceProcAddr && m_VulkanFunctions.vkGetDeviceProcAddr &&
  13040         "To use VMA_DYNAMIC_VULKAN_FUNCTIONS in new versions of VMA you now have to pass "
  13041         "VmaVulkanFunctions::vkGetInstanceProcAddr and vkGetDeviceProcAddr as VmaAllocatorCreateInfo::pVulkanFunctions. "
  13042         "Other members can be null.");
  13043 
  13044 #define VMA_FETCH_INSTANCE_FUNC(memberName, functionPointerType, functionNameString) \
  13045     if(m_VulkanFunctions.memberName == VMA_NULL) \
  13046         m_VulkanFunctions.memberName = \
  13047             (functionPointerType)m_VulkanFunctions.vkGetInstanceProcAddr(m_hInstance, functionNameString);
  13048 #define VMA_FETCH_DEVICE_FUNC(memberName, functionPointerType, functionNameString) \
  13049     if(m_VulkanFunctions.memberName == VMA_NULL) \
  13050         m_VulkanFunctions.memberName = \
  13051             (functionPointerType)m_VulkanFunctions.vkGetDeviceProcAddr(m_hDevice, functionNameString);
  13052 
  13053     VMA_FETCH_INSTANCE_FUNC(vkGetPhysicalDeviceProperties, PFN_vkGetPhysicalDeviceProperties, "vkGetPhysicalDeviceProperties");
  13054     VMA_FETCH_INSTANCE_FUNC(vkGetPhysicalDeviceMemoryProperties, PFN_vkGetPhysicalDeviceMemoryProperties, "vkGetPhysicalDeviceMemoryProperties");
  13055     VMA_FETCH_DEVICE_FUNC(vkAllocateMemory, PFN_vkAllocateMemory, "vkAllocateMemory");
  13056     VMA_FETCH_DEVICE_FUNC(vkFreeMemory, PFN_vkFreeMemory, "vkFreeMemory");
  13057     VMA_FETCH_DEVICE_FUNC(vkMapMemory, PFN_vkMapMemory, "vkMapMemory");
  13058     VMA_FETCH_DEVICE_FUNC(vkUnmapMemory, PFN_vkUnmapMemory, "vkUnmapMemory");
  13059     VMA_FETCH_DEVICE_FUNC(vkFlushMappedMemoryRanges, PFN_vkFlushMappedMemoryRanges, "vkFlushMappedMemoryRanges");
  13060     VMA_FETCH_DEVICE_FUNC(vkInvalidateMappedMemoryRanges, PFN_vkInvalidateMappedMemoryRanges, "vkInvalidateMappedMemoryRanges");
  13061     VMA_FETCH_DEVICE_FUNC(vkBindBufferMemory, PFN_vkBindBufferMemory, "vkBindBufferMemory");
  13062     VMA_FETCH_DEVICE_FUNC(vkBindImageMemory, PFN_vkBindImageMemory, "vkBindImageMemory");
  13063     VMA_FETCH_DEVICE_FUNC(vkGetBufferMemoryRequirements, PFN_vkGetBufferMemoryRequirements, "vkGetBufferMemoryRequirements");
  13064     VMA_FETCH_DEVICE_FUNC(vkGetImageMemoryRequirements, PFN_vkGetImageMemoryRequirements, "vkGetImageMemoryRequirements");
  13065     VMA_FETCH_DEVICE_FUNC(vkCreateBuffer, PFN_vkCreateBuffer, "vkCreateBuffer");
  13066     VMA_FETCH_DEVICE_FUNC(vkDestroyBuffer, PFN_vkDestroyBuffer, "vkDestroyBuffer");
  13067     VMA_FETCH_DEVICE_FUNC(vkCreateImage, PFN_vkCreateImage, "vkCreateImage");
  13068     VMA_FETCH_DEVICE_FUNC(vkDestroyImage, PFN_vkDestroyImage, "vkDestroyImage");
  13069     VMA_FETCH_DEVICE_FUNC(vkCmdCopyBuffer, PFN_vkCmdCopyBuffer, "vkCmdCopyBuffer");
  13070 
  13071 #if VMA_VULKAN_VERSION >= 1001000
  13072     if(m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0))
  13073     {
  13074         VMA_FETCH_DEVICE_FUNC(vkGetBufferMemoryRequirements2KHR, PFN_vkGetBufferMemoryRequirements2, "vkGetBufferMemoryRequirements2");
  13075         VMA_FETCH_DEVICE_FUNC(vkGetImageMemoryRequirements2KHR, PFN_vkGetImageMemoryRequirements2, "vkGetImageMemoryRequirements2");
  13076         VMA_FETCH_DEVICE_FUNC(vkBindBufferMemory2KHR, PFN_vkBindBufferMemory2, "vkBindBufferMemory2");
  13077         VMA_FETCH_DEVICE_FUNC(vkBindImageMemory2KHR, PFN_vkBindImageMemory2, "vkBindImageMemory2");
  13078     }
  13079 #endif
  13080 
  13081 #if VMA_MEMORY_BUDGET || VMA_VULKAN_VERSION >= 1001000
  13082     if(m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0))
  13083     {
  13084         VMA_FETCH_INSTANCE_FUNC(vkGetPhysicalDeviceMemoryProperties2KHR, PFN_vkGetPhysicalDeviceMemoryProperties2KHR, "vkGetPhysicalDeviceMemoryProperties2");
  13085     }
  13086     else if(m_UseExtMemoryBudget)
  13087     {
  13088         VMA_FETCH_INSTANCE_FUNC(vkGetPhysicalDeviceMemoryProperties2KHR, PFN_vkGetPhysicalDeviceMemoryProperties2KHR, "vkGetPhysicalDeviceMemoryProperties2KHR");
  13089     }
  13090 #endif
  13091 
  13092 #if VMA_DEDICATED_ALLOCATION
  13093     if(m_UseKhrDedicatedAllocation)
  13094     {
  13095         VMA_FETCH_DEVICE_FUNC(vkGetBufferMemoryRequirements2KHR, PFN_vkGetBufferMemoryRequirements2KHR, "vkGetBufferMemoryRequirements2KHR");
  13096         VMA_FETCH_DEVICE_FUNC(vkGetImageMemoryRequirements2KHR, PFN_vkGetImageMemoryRequirements2KHR, "vkGetImageMemoryRequirements2KHR");
  13097     }
  13098 #endif
  13099 
  13100 #if VMA_BIND_MEMORY2
  13101     if(m_UseKhrBindMemory2)
  13102     {
  13103         VMA_FETCH_DEVICE_FUNC(vkBindBufferMemory2KHR, PFN_vkBindBufferMemory2KHR, "vkBindBufferMemory2KHR");
  13104         VMA_FETCH_DEVICE_FUNC(vkBindImageMemory2KHR, PFN_vkBindImageMemory2KHR, "vkBindImageMemory2KHR");
  13105     }
  13106 #endif // #if VMA_BIND_MEMORY2
  13107 
  13119 #if VMA_VULKAN_VERSION >= 1003000
  13120     if(m_VulkanApiVersion >= VK_MAKE_VERSION(1, 3, 0))
  13121     {
  13122         VMA_FETCH_DEVICE_FUNC(vkGetDeviceBufferMemoryRequirements, PFN_vkGetDeviceBufferMemoryRequirements, "vkGetDeviceBufferMemoryRequirements");
  13123         VMA_FETCH_DEVICE_FUNC(vkGetDeviceImageMemoryRequirements, PFN_vkGetDeviceImageMemoryRequirements, "vkGetDeviceImageMemoryRequirements");
  13124     }
  13125 #endif
  13126 #if VMA_KHR_MAINTENANCE4
  13127     if(m_UseKhrMaintenance4)
  13128     {
  13129         VMA_FETCH_DEVICE_FUNC(vkGetDeviceBufferMemoryRequirements, PFN_vkGetDeviceBufferMemoryRequirementsKHR, "vkGetDeviceBufferMemoryRequirementsKHR");
  13130         VMA_FETCH_DEVICE_FUNC(vkGetDeviceImageMemoryRequirements, PFN_vkGetDeviceImageMemoryRequirementsKHR, "vkGetDeviceImageMemoryRequirementsKHR");
  13131     }
  13132 #endif
  13133 
  13134 #undef VMA_FETCH_DEVICE_FUNC
  13135 #undef VMA_FETCH_INSTANCE_FUNC
  13136 }
  13137 
  13138 #endif // VMA_DYNAMIC_VULKAN_FUNCTIONS == 1
  13139 
  13140 void VmaAllocator_T::ValidateVulkanFunctions()
  13141 {
  13142     VMA_ASSERT(m_VulkanFunctions.vkGetPhysicalDeviceProperties != VMA_NULL);
  13143     VMA_ASSERT(m_VulkanFunctions.vkGetPhysicalDeviceMemoryProperties != VMA_NULL);
  13144     VMA_ASSERT(m_VulkanFunctions.vkAllocateMemory != VMA_NULL);
  13145     VMA_ASSERT(m_VulkanFunctions.vkFreeMemory != VMA_NULL);
  13146     VMA_ASSERT(m_VulkanFunctions.vkMapMemory != VMA_NULL);
  13147     VMA_ASSERT(m_VulkanFunctions.vkUnmapMemory != VMA_NULL);
  13148     VMA_ASSERT(m_VulkanFunctions.vkFlushMappedMemoryRanges != VMA_NULL);
  13149     VMA_ASSERT(m_VulkanFunctions.vkInvalidateMappedMemoryRanges != VMA_NULL);
  13150     VMA_ASSERT(m_VulkanFunctions.vkBindBufferMemory != VMA_NULL);
  13151     VMA_ASSERT(m_VulkanFunctions.vkBindImageMemory != VMA_NULL);
  13152     VMA_ASSERT(m_VulkanFunctions.vkGetBufferMemoryRequirements != VMA_NULL);
  13153     VMA_ASSERT(m_VulkanFunctions.vkGetImageMemoryRequirements != VMA_NULL);
  13154     VMA_ASSERT(m_VulkanFunctions.vkCreateBuffer != VMA_NULL);
  13155     VMA_ASSERT(m_VulkanFunctions.vkDestroyBuffer != VMA_NULL);
  13156     VMA_ASSERT(m_VulkanFunctions.vkCreateImage != VMA_NULL);
  13157     VMA_ASSERT(m_VulkanFunctions.vkDestroyImage != VMA_NULL);
  13158     VMA_ASSERT(m_VulkanFunctions.vkCmdCopyBuffer != VMA_NULL);
  13159 
  13160 #if VMA_DEDICATED_ALLOCATION || VMA_VULKAN_VERSION >= 1001000
  13161     if(m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0) || m_UseKhrDedicatedAllocation)
  13162     {
  13163         VMA_ASSERT(m_VulkanFunctions.vkGetBufferMemoryRequirements2KHR != VMA_NULL);
  13164         VMA_ASSERT(m_VulkanFunctions.vkGetImageMemoryRequirements2KHR != VMA_NULL);
  13165     }
  13166 #endif
  13167 
  13168 #if VMA_BIND_MEMORY2 || VMA_VULKAN_VERSION >= 1001000
  13169     if(m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0) || m_UseKhrBindMemory2)
  13170     {
  13171         VMA_ASSERT(m_VulkanFunctions.vkBindBufferMemory2KHR != VMA_NULL);
  13172         VMA_ASSERT(m_VulkanFunctions.vkBindImageMemory2KHR != VMA_NULL);
  13173     }
  13174 #endif
  13175 
  13176 #if VMA_MEMORY_BUDGET || VMA_VULKAN_VERSION >= 1001000
  13177     if(m_UseExtMemoryBudget || m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0))
  13178     {
  13179         VMA_ASSERT(m_VulkanFunctions.vkGetPhysicalDeviceMemoryProperties2KHR != VMA_NULL);
  13180     }
  13181 #endif
  13182 
  13183     // Not validating these due to suspected driver bugs with these function
  13184     // pointers being null despite the correct extension or Vulkan version
  13185     // being enabled. See issue #397. Their usage in VMA is optional anyway.
  13186     //
  13187     // VMA_ASSERT(m_VulkanFunctions.vkGetDeviceBufferMemoryRequirements != VMA_NULL);
  13188     // VMA_ASSERT(m_VulkanFunctions.vkGetDeviceImageMemoryRequirements != VMA_NULL);
  13189 }
  13190 
  13191 VkDeviceSize VmaAllocator_T::CalcPreferredBlockSize(uint32_t memTypeIndex)
  13192 {
  13193     const uint32_t heapIndex = MemoryTypeIndexToHeapIndex(memTypeIndex);
  13194     const VkDeviceSize heapSize = m_MemProps.memoryHeaps[heapIndex].size;
  13195     const bool isSmallHeap = heapSize <= VMA_SMALL_HEAP_MAX_SIZE;
  13196     return VmaAlignUp(isSmallHeap ? (heapSize / 8) : m_PreferredLargeHeapBlockSize, (VkDeviceSize)32);
  13197 }
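
// Editor's note: worked example of the heuristic above, assuming the
// default VMA_SMALL_HEAP_MAX_SIZE (1 GiB) and
// VMA_DEFAULT_LARGE_HEAP_BLOCK_SIZE (256 MiB):
// - 256 MiB heap (small): preferred block size = 256 MiB / 8 = 32 MiB.
// - 8 GiB heap (large):   preferred block size = 256 MiB.
// The VmaAlignUp to 32 B only matters when a small heap's size is not
// divisible by 8.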
  13198 
  13199 VkResult VmaAllocator_T::AllocateMemoryOfType(
  13200     VmaPool pool,
  13201     VkDeviceSize size,
  13202     VkDeviceSize alignment,
  13203     bool dedicatedPreferred,
  13204     VkBuffer dedicatedBuffer,
  13205     VkImage dedicatedImage,
  13206     VmaBufferImageUsage dedicatedBufferImageUsage,
  13207     const VmaAllocationCreateInfo& createInfo,
  13208     uint32_t memTypeIndex,
  13209     VmaSuballocationType suballocType,
  13210     VmaDedicatedAllocationList& dedicatedAllocations,
  13211     VmaBlockVector& blockVector,
  13212     size_t allocationCount,
  13213     VmaAllocation* pAllocations)
  13214 {
  13215     VMA_ASSERT(pAllocations != VMA_NULL);
  13216     VMA_DEBUG_LOG_FORMAT("  AllocateMemory: MemoryTypeIndex=%" PRIu32 ", AllocationCount=%zu, Size=%" PRIu64, memTypeIndex, allocationCount, size);
  13217 
  13218     VmaAllocationCreateInfo finalCreateInfo = createInfo;
  13219     VkResult res = CalcMemTypeParams(
  13220         finalCreateInfo,
  13221         memTypeIndex,
  13222         size,
  13223         allocationCount);
  13224     if(res != VK_SUCCESS)
  13225         return res;
  13226 
  13227     if((finalCreateInfo.flags & VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT) != 0)
  13228     {
  13229         return AllocateDedicatedMemory(
  13230             pool,
  13231             size,
  13232             suballocType,
  13233             dedicatedAllocations,
  13234             memTypeIndex,
  13235             (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0,
  13236             (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT) != 0,
  13237             (finalCreateInfo.flags &
  13238                 (VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT | VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT)) != 0,
  13239             (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_CAN_ALIAS_BIT) != 0,
  13240             finalCreateInfo.pUserData,
  13241             finalCreateInfo.priority,
  13242             dedicatedBuffer,
  13243             dedicatedImage,
  13244             dedicatedBufferImageUsage,
  13245             allocationCount,
  13246             pAllocations,
  13247             blockVector.GetAllocationNextPtr());
  13248     }
  13249     else
  13250     {
  13251         const bool canAllocateDedicated =
  13252             (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) == 0 &&
  13253             (pool == VK_NULL_HANDLE || !blockVector.HasExplicitBlockSize());
  13254 
  13255         if(canAllocateDedicated)
  13256         {
  13257             // Heuristics: Allocate dedicated memory if requested size is greater than half of preferred block size.
  13258             if(size > blockVector.GetPreferredBlockSize() / 2)
  13259             {
  13260                 dedicatedPreferred = true;
  13261             }
  13262             // Protection against creating each allocation as dedicated when we reach or exceed heap size/budget,
  13263             // which can quickly deplete maxMemoryAllocationCount: Don't prefer dedicated allocations when above
  13264             // 3/4 of the maximum allocation count.
  13265             if(m_PhysicalDeviceProperties.limits.maxMemoryAllocationCount < UINT32_MAX / 4 &&
  13266                 m_DeviceMemoryCount.load() > m_PhysicalDeviceProperties.limits.maxMemoryAllocationCount * 3 / 4)
  13267             {
  13268                 dedicatedPreferred = false;
  13269             }
  13270 
  13271             if(dedicatedPreferred)
  13272             {
  13273                 res = AllocateDedicatedMemory(
  13274                     pool,
  13275                     size,
  13276                     suballocType,
  13277                     dedicatedAllocations,
  13278                     memTypeIndex,
  13279                     (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0,
  13280                     (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT) != 0,
  13281                     (finalCreateInfo.flags &
  13282                         (VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT | VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT)) != 0,
  13283                     (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_CAN_ALIAS_BIT) != 0,
  13284                     finalCreateInfo.pUserData,
  13285                     finalCreateInfo.priority,
  13286                     dedicatedBuffer,
  13287                     dedicatedImage,
  13288                     dedicatedBufferImageUsage,
  13289                     allocationCount,
  13290                     pAllocations,
  13291                     blockVector.GetAllocationNextPtr());
  13292                 if(res == VK_SUCCESS)
  13293                 {
  13294                     // Succeeded: AllocateDedicatedMemory function already filled pAllocations, nothing more to do here.
  13295                     VMA_DEBUG_LOG("    Allocated as DedicatedMemory");
  13296                     return VK_SUCCESS;
  13297                 }
  13298             }
  13299         }
  13300 
  13301         res = blockVector.Allocate(
  13302             size,
  13303             alignment,
  13304             finalCreateInfo,
  13305             suballocType,
  13306             allocationCount,
  13307             pAllocations);
  13308         if(res == VK_SUCCESS)
  13309             return VK_SUCCESS;
  13310 
  13311         // Try dedicated memory.
  13312         if(canAllocateDedicated && !dedicatedPreferred)
  13313         {
  13314             res = AllocateDedicatedMemory(
  13315                 pool,
  13316                 size,
  13317                 suballocType,
  13318                 dedicatedAllocations,
  13319                 memTypeIndex,
  13320                 (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0,
  13321                 (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT) != 0,
  13322                 (finalCreateInfo.flags &
  13323                     (VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT | VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT)) != 0,
  13324                 (finalCreateInfo.flags & VMA_ALLOCATION_CREATE_CAN_ALIAS_BIT) != 0,
  13325                 finalCreateInfo.pUserData,
  13326                 finalCreateInfo.priority,
  13327                 dedicatedBuffer,
  13328                 dedicatedImage,
  13329                 dedicatedBufferImageUsage,
  13330                 allocationCount,
  13331                 pAllocations,
  13332                 blockVector.GetAllocationNextPtr());
  13333             if(res == VK_SUCCESS)
  13334             {
  13335                 // Succeeded: AllocateDedicatedMemory function already filled pAllocations, nothing more to do here.
  13336                 VMA_DEBUG_LOG("    Allocated as DedicatedMemory");
  13337                 return VK_SUCCESS;
  13338             }
  13339         }
  13340         // Everything failed: Return error code.
  13341         VMA_DEBUG_LOG("    vkAllocateMemory FAILED");
  13342         return res;
  13343     }
  13344 }
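
// Editor's note: the control flow above tries, in order: (1) a dedicated
// allocation when VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT is set or the
// heuristic prefers it, (2) suballocation from the block vector, (3) a
// dedicated allocation as a last resort. A caller can force path (1)
// explicitly; sketch with illustrative values:
//
//     VmaAllocationCreateInfo allocCreateInfo = {};
//     allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
//     allocCreateInfo.flags = VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT;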
  13345 
  13346 VkResult VmaAllocator_T::AllocateDedicatedMemory(
  13347     VmaPool pool,
  13348     VkDeviceSize size,
  13349     VmaSuballocationType suballocType,
  13350     VmaDedicatedAllocationList& dedicatedAllocations,
  13351     uint32_t memTypeIndex,
  13352     bool map,
  13353     bool isUserDataString,
  13354     bool isMappingAllowed,
  13355     bool canAliasMemory,
  13356     void* pUserData,
  13357     float priority,
  13358     VkBuffer dedicatedBuffer,
  13359     VkImage dedicatedImage,
  13360     VmaBufferImageUsage dedicatedBufferImageUsage,
  13361     size_t allocationCount,
  13362     VmaAllocation* pAllocations,
  13363     const void* pNextChain)
  13364 {
  13365     VMA_ASSERT(allocationCount > 0 && pAllocations);
  13366 
  13367     VkMemoryAllocateInfo allocInfo = { VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO };
  13368     allocInfo.memoryTypeIndex = memTypeIndex;
  13369     allocInfo.allocationSize = size;
  13370     allocInfo.pNext = pNextChain;
  13371 
  13372 #if VMA_DEDICATED_ALLOCATION || VMA_VULKAN_VERSION >= 1001000
  13373     VkMemoryDedicatedAllocateInfoKHR dedicatedAllocInfo = { VK_STRUCTURE_TYPE_MEMORY_DEDICATED_ALLOCATE_INFO_KHR };
  13374     if(!canAliasMemory)
  13375     {
  13376         if(m_UseKhrDedicatedAllocation || m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0))
  13377         {
  13378             if(dedicatedBuffer != VK_NULL_HANDLE)
  13379             {
  13380                 VMA_ASSERT(dedicatedImage == VK_NULL_HANDLE);
  13381                 dedicatedAllocInfo.buffer = dedicatedBuffer;
  13382                 VmaPnextChainPushFront(&allocInfo, &dedicatedAllocInfo);
  13383             }
  13384             else if(dedicatedImage != VK_NULL_HANDLE)
  13385             {
  13386                 dedicatedAllocInfo.image = dedicatedImage;
  13387                 VmaPnextChainPushFront(&allocInfo, &dedicatedAllocInfo);
  13388             }
  13389         }
  13390     }
  13391 #endif // #if VMA_DEDICATED_ALLOCATION || VMA_VULKAN_VERSION >= 1001000
  13392 
  13393 #if VMA_BUFFER_DEVICE_ADDRESS
  13394     VkMemoryAllocateFlagsInfoKHR allocFlagsInfo = { VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_FLAGS_INFO_KHR };
  13395     if(m_UseKhrBufferDeviceAddress)
  13396     {
  13397         bool canContainBufferWithDeviceAddress = true;
  13398         if(dedicatedBuffer != VK_NULL_HANDLE)
  13399         {
  13400             canContainBufferWithDeviceAddress = dedicatedBufferImageUsage == VmaBufferImageUsage::UNKNOWN ||
  13401                 dedicatedBufferImageUsage.Contains(VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT_EXT);
  13402         }
  13403         else if(dedicatedImage != VK_NULL_HANDLE)
  13404         {
  13405             canContainBufferWithDeviceAddress = false;
  13406         }
  13407         if(canContainBufferWithDeviceAddress)
  13408         {
  13409             allocFlagsInfo.flags = VK_MEMORY_ALLOCATE_DEVICE_ADDRESS_BIT_KHR;
  13410             VmaPnextChainPushFront(&allocInfo, &allocFlagsInfo);
  13411         }
  13412     }
  13413 #endif // #if VMA_BUFFER_DEVICE_ADDRESS
  13414 
  13415 #if VMA_MEMORY_PRIORITY
  13416     VkMemoryPriorityAllocateInfoEXT priorityInfo = { VK_STRUCTURE_TYPE_MEMORY_PRIORITY_ALLOCATE_INFO_EXT };
  13417     if(m_UseExtMemoryPriority)
  13418     {
  13419         VMA_ASSERT(priority >= 0.f && priority <= 1.f);
  13420         priorityInfo.priority = priority;
  13421         VmaPnextChainPushFront(&allocInfo, &priorityInfo);
  13422     }
  13423 #endif // #if VMA_MEMORY_PRIORITY
  13424 
  13425 #if VMA_EXTERNAL_MEMORY
  13426     // Attach VkExportMemoryAllocateInfoKHR if necessary.
  13427     VkExportMemoryAllocateInfoKHR exportMemoryAllocInfo = { VK_STRUCTURE_TYPE_EXPORT_MEMORY_ALLOCATE_INFO_KHR };
  13428     exportMemoryAllocInfo.handleTypes = GetExternalMemoryHandleTypeFlags(memTypeIndex);
  13429     if(exportMemoryAllocInfo.handleTypes != 0)
  13430     {
  13431         VmaPnextChainPushFront(&allocInfo, &exportMemoryAllocInfo);
  13432     }
  13433 #endif // #if VMA_EXTERNAL_MEMORY
  13434 
  13435     size_t allocIndex;
  13436     VkResult res = VK_SUCCESS;
  13437     for(allocIndex = 0; allocIndex < allocationCount; ++allocIndex)
  13438     {
  13439         res = AllocateDedicatedMemoryPage(
  13440             pool,
  13441             size,
  13442             suballocType,
  13443             memTypeIndex,
  13444             allocInfo,
  13445             map,
  13446             isUserDataString,
  13447             isMappingAllowed,
  13448             pUserData,
  13449             pAllocations + allocIndex);
  13450         if(res != VK_SUCCESS)
  13451         {
  13452             break;
  13453         }
  13454     }
  13455 
  13456     if(res == VK_SUCCESS)
  13457     {
  13458         for (allocIndex = 0; allocIndex < allocationCount; ++allocIndex)
  13459         {
  13460             dedicatedAllocations.Register(pAllocations[allocIndex]);
  13461         }
  13462         VMA_DEBUG_LOG_FORMAT("    Allocated DedicatedMemory Count=%zu, MemoryTypeIndex=#%" PRIu32, allocationCount, memTypeIndex);
  13463     }
  13464     else
  13465     {
  13466         // Free all already created allocations.
  13467         while(allocIndex--)
  13468         {
  13469             VmaAllocation currAlloc = pAllocations[allocIndex];
  13470             VkDeviceMemory hMemory = currAlloc->GetMemory();
  13471 
  13472             /*
  13473             There is no need to call this, because the Vulkan spec allows skipping vkUnmapMemory
  13474             before vkFreeMemory.
  13475 
  13476             if(currAlloc->GetMappedData() != VMA_NULL)
  13477             {
  13478                 (*m_VulkanFunctions.vkUnmapMemory)(m_hDevice, hMemory);
  13479             }
  13480             */
  13481 
  13482             FreeVulkanMemory(memTypeIndex, currAlloc->GetSize(), hMemory);
  13483             m_Budget.RemoveAllocation(MemoryTypeIndexToHeapIndex(memTypeIndex), currAlloc->GetSize());
  13484             m_AllocationObjectAllocator.Free(currAlloc);
  13485         }
  13486 
  13487         memset(pAllocations, 0, sizeof(VmaAllocation) * allocationCount);
  13488     }
  13489 
  13490     return res;
  13491 }
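
// Editor's note: because VmaPnextChainPushFront() prepends, the pNext chain
// assembled above reads in reverse order of the pushes. With every feature
// enabled for a dedicated buffer, vkAllocateMemory receives roughly:
//
//     VkMemoryAllocateInfo
//       -> VkExportMemoryAllocateInfoKHR
//       -> VkMemoryPriorityAllocateInfoEXT
//       -> VkMemoryAllocateFlagsInfoKHR
//       -> VkMemoryDedicatedAllocateInfoKHR
//       -> caller-provided pNextChain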
  13492 
  13493 VkResult VmaAllocator_T::AllocateDedicatedMemoryPage(
  13494     VmaPool pool,
  13495     VkDeviceSize size,
  13496     VmaSuballocationType suballocType,
  13497     uint32_t memTypeIndex,
  13498     const VkMemoryAllocateInfo& allocInfo,
  13499     bool map,
  13500     bool isUserDataString,
  13501     bool isMappingAllowed,
  13502     void* pUserData,
  13503     VmaAllocation* pAllocation)
  13504 {
  13505     VkDeviceMemory hMemory = VK_NULL_HANDLE;
  13506     VkResult res = AllocateVulkanMemory(&allocInfo, &hMemory);
  13507     if(res < 0)
  13508     {
  13509         VMA_DEBUG_LOG("    vkAllocateMemory FAILED");
  13510         return res;
  13511     }
  13512 
  13513     void* pMappedData = VMA_NULL;
  13514     if(map)
  13515     {
  13516         res = (*m_VulkanFunctions.vkMapMemory)(
  13517             m_hDevice,
  13518             hMemory,
  13519             0, // offset
  13520             VK_WHOLE_SIZE, // size
  13521             0, // flags
  13522             &pMappedData);
  13523         if(res < 0)
  13524         {
  13525             VMA_DEBUG_LOG("    vkMapMemory FAILED");
  13526             FreeVulkanMemory(memTypeIndex, size, hMemory);
  13527             return res;
  13528         }
  13529     }
  13530 
  13531     *pAllocation = m_AllocationObjectAllocator.Allocate(isMappingAllowed);
  13532     (*pAllocation)->InitDedicatedAllocation(pool, memTypeIndex, hMemory, suballocType, pMappedData, size);
  13533     if (isUserDataString)
  13534         (*pAllocation)->SetName(this, (const char*)pUserData);
  13535     else
  13536         (*pAllocation)->SetUserData(this, pUserData);
  13537     m_Budget.AddAllocation(MemoryTypeIndexToHeapIndex(memTypeIndex), size);
  13538     if(VMA_DEBUG_INITIALIZE_ALLOCATIONS)
  13539     {
  13540         FillAllocation(*pAllocation, VMA_ALLOCATION_FILL_PATTERN_CREATED);
  13541     }
  13542 
  13543     return VK_SUCCESS;
  13544 }
  13545 
  13546 void VmaAllocator_T::GetBufferMemoryRequirements(
  13547     VkBuffer hBuffer,
  13548     VkMemoryRequirements& memReq,
  13549     bool& requiresDedicatedAllocation,
  13550     bool& prefersDedicatedAllocation) const
  13551 {
  13552 #if VMA_DEDICATED_ALLOCATION || VMA_VULKAN_VERSION >= 1001000
  13553     if(m_UseKhrDedicatedAllocation || m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0))
  13554     {
  13555         VkBufferMemoryRequirementsInfo2KHR memReqInfo = { VK_STRUCTURE_TYPE_BUFFER_MEMORY_REQUIREMENTS_INFO_2_KHR };
  13556         memReqInfo.buffer = hBuffer;
  13557 
  13558         VkMemoryDedicatedRequirementsKHR memDedicatedReq = { VK_STRUCTURE_TYPE_MEMORY_DEDICATED_REQUIREMENTS_KHR };
  13559 
  13560         VkMemoryRequirements2KHR memReq2 = { VK_STRUCTURE_TYPE_MEMORY_REQUIREMENTS_2_KHR };
  13561         VmaPnextChainPushFront(&memReq2, &memDedicatedReq);
  13562 
  13563         (*m_VulkanFunctions.vkGetBufferMemoryRequirements2KHR)(m_hDevice, &memReqInfo, &memReq2);
  13564 
  13565         memReq = memReq2.memoryRequirements;
  13566         requiresDedicatedAllocation = (memDedicatedReq.requiresDedicatedAllocation != VK_FALSE);
  13567         prefersDedicatedAllocation  = (memDedicatedReq.prefersDedicatedAllocation  != VK_FALSE);
  13568     }
  13569     else
  13570 #endif // #if VMA_DEDICATED_ALLOCATION || VMA_VULKAN_VERSION >= 1001000
  13571     {
  13572         (*m_VulkanFunctions.vkGetBufferMemoryRequirements)(m_hDevice, hBuffer, &memReq);
  13573         requiresDedicatedAllocation = false;
  13574         prefersDedicatedAllocation  = false;
  13575     }
  13576 }
  13577 
  13578 void VmaAllocator_T::GetImageMemoryRequirements(
  13579     VkImage hImage,
  13580     VkMemoryRequirements& memReq,
  13581     bool& requiresDedicatedAllocation,
  13582     bool& prefersDedicatedAllocation) const
  13583 {
  13584 #if VMA_DEDICATED_ALLOCATION || VMA_VULKAN_VERSION >= 1001000
  13585     if(m_UseKhrDedicatedAllocation || m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0))
  13586     {
  13587         VkImageMemoryRequirementsInfo2KHR memReqInfo = { VK_STRUCTURE_TYPE_IMAGE_MEMORY_REQUIREMENTS_INFO_2_KHR };
  13588         memReqInfo.image = hImage;
  13589 
  13590         VkMemoryDedicatedRequirementsKHR memDedicatedReq = { VK_STRUCTURE_TYPE_MEMORY_DEDICATED_REQUIREMENTS_KHR };
  13591 
  13592         VkMemoryRequirements2KHR memReq2 = { VK_STRUCTURE_TYPE_MEMORY_REQUIREMENTS_2_KHR };
  13593         VmaPnextChainPushFront(&memReq2, &memDedicatedReq);
  13594 
  13595         (*m_VulkanFunctions.vkGetImageMemoryRequirements2KHR)(m_hDevice, &memReqInfo, &memReq2);
  13596 
  13597         memReq = memReq2.memoryRequirements;
  13598         requiresDedicatedAllocation = (memDedicatedReq.requiresDedicatedAllocation != VK_FALSE);
  13599         prefersDedicatedAllocation  = (memDedicatedReq.prefersDedicatedAllocation  != VK_FALSE);
  13600     }
  13601     else
  13602 #endif // #if VMA_DEDICATED_ALLOCATION || VMA_VULKAN_VERSION >= 1001000
  13603     {
  13604         (*m_VulkanFunctions.vkGetImageMemoryRequirements)(m_hDevice, hImage, &memReq);
  13605         requiresDedicatedAllocation = false;
  13606         prefersDedicatedAllocation  = false;
  13607     }
  13608 }
  13609 
  13610 VkResult VmaAllocator_T::FindMemoryTypeIndex(
  13611     uint32_t memoryTypeBits,
  13612     const VmaAllocationCreateInfo* pAllocationCreateInfo,
  13613     VmaBufferImageUsage bufImgUsage,
  13614     uint32_t* pMemoryTypeIndex) const
  13615 {
  13616     memoryTypeBits &= GetGlobalMemoryTypeBits();
  13617 
  13618     if(pAllocationCreateInfo->memoryTypeBits != 0)
  13619     {
  13620         memoryTypeBits &= pAllocationCreateInfo->memoryTypeBits;
  13621     }
  13622 
  13623     VkMemoryPropertyFlags requiredFlags = 0, preferredFlags = 0, notPreferredFlags = 0;
  13624     if(!FindMemoryPreferences(
  13625         IsIntegratedGpu(),
  13626         *pAllocationCreateInfo,
  13627         bufImgUsage,
  13628         requiredFlags, preferredFlags, notPreferredFlags))
  13629     {
  13630         return VK_ERROR_FEATURE_NOT_PRESENT;
  13631     }
  13632 
  13633     *pMemoryTypeIndex = UINT32_MAX;
  13634     uint32_t minCost = UINT32_MAX;
  13635     for(uint32_t memTypeIndex = 0, memTypeBit = 1;
  13636         memTypeIndex < GetMemoryTypeCount();
  13637         ++memTypeIndex, memTypeBit <<= 1)
  13638     {
  13639         // This memory type is acceptable according to memoryTypeBits bitmask.
  13640         if((memTypeBit & memoryTypeBits) != 0)
  13641         {
  13642             const VkMemoryPropertyFlags currFlags =
  13643                 m_MemProps.memoryTypes[memTypeIndex].propertyFlags;
  13644             // This memory type contains requiredFlags.
  13645             if((requiredFlags & ~currFlags) == 0)
  13646             {
  13647                 // Calculate cost as number of bits from preferredFlags not present in this memory type plus number of bits from notPreferredFlags that are present.
  13648                 uint32_t currCost = VMA_COUNT_BITS_SET(preferredFlags & ~currFlags) +
  13649                     VMA_COUNT_BITS_SET(currFlags & notPreferredFlags);
  13650                 // Remember memory type with lowest cost.
  13651                 if(currCost < minCost)
  13652                 {
  13653                     *pMemoryTypeIndex = memTypeIndex;
  13654                     if(currCost == 0)
  13655                     {
  13656                         return VK_SUCCESS;
  13657                     }
  13658                     minCost = currCost;
  13659                 }
  13660             }
  13661         }
  13662     }
  13663     return (*pMemoryTypeIndex != UINT32_MAX) ? VK_SUCCESS : VK_ERROR_FEATURE_NOT_PRESENT;
  13664 }
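
// Editor's note: worked example of the cost function above. Suppose
// requiredFlags = HOST_VISIBLE, preferredFlags = HOST_COHERENT and
// notPreferredFlags = DEVICE_LOCAL. A candidate type with
// HOST_VISIBLE | DEVICE_LOCAL satisfies requiredFlags and costs
// popcount(missing HOST_COHERENT) + popcount(present DEVICE_LOCAL) = 2,
// while a HOST_VISIBLE | HOST_COHERENT type costs 0 and is returned
// immediately. The public wrapper for this logic is vmaFindMemoryTypeIndex().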
  13665 
  13666 VkResult VmaAllocator_T::CalcMemTypeParams(
  13667     VmaAllocationCreateInfo& inoutCreateInfo,
  13668     uint32_t memTypeIndex,
  13669     VkDeviceSize size,
  13670     size_t allocationCount)
  13671 {
  13672     // If memory type is not HOST_VISIBLE, disable MAPPED.
  13673     if((inoutCreateInfo.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0 &&
  13674         (m_MemProps.memoryTypes[memTypeIndex].propertyFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) == 0)
  13675     {
  13676         inoutCreateInfo.flags &= ~VMA_ALLOCATION_CREATE_MAPPED_BIT;
  13677     }
  13678 
  13679     if((inoutCreateInfo.flags & VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT) != 0 &&
  13680         (inoutCreateInfo.flags & VMA_ALLOCATION_CREATE_WITHIN_BUDGET_BIT) != 0)
  13681     {
  13682         const uint32_t heapIndex = MemoryTypeIndexToHeapIndex(memTypeIndex);
  13683         VmaBudget heapBudget = {};
  13684         GetHeapBudgets(&heapBudget, heapIndex, 1);
  13685         if(heapBudget.usage + size * allocationCount > heapBudget.budget)
  13686         {
  13687             return VK_ERROR_OUT_OF_DEVICE_MEMORY;
  13688         }
  13689     }
  13690     return VK_SUCCESS;
  13691 }
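
// Editor's note: sketch of the caller-side flags that activate the budget
// check in CalcMemTypeParams() above (both flags are public VMA API; the
// usage shown is illustrative):
//
//     VmaAllocationCreateInfo allocCreateInfo = {};
//     allocCreateInfo.flags = VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT |
//         VMA_ALLOCATION_CREATE_WITHIN_BUDGET_BIT;
//     // The allocation then fails early with VK_ERROR_OUT_OF_DEVICE_MEMORY
//     // instead of calling vkAllocateMemory beyond the heap budget.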
  13692 
  13693 VkResult VmaAllocator_T::CalcAllocationParams(
  13694     VmaAllocationCreateInfo& inoutCreateInfo,
  13695     bool dedicatedRequired,
  13696     bool dedicatedPreferred)
  13697 {
  13698     VMA_ASSERT((inoutCreateInfo.flags &
  13699         (VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT | VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT)) !=
  13700         (VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT | VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT) &&
  13701         "Specifying both flags VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT and VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT is incorrect.");
  13702     VMA_ASSERT((((inoutCreateInfo.flags & VMA_ALLOCATION_CREATE_HOST_ACCESS_ALLOW_TRANSFER_INSTEAD_BIT) == 0 ||
  13703         (inoutCreateInfo.flags & (VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT | VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT)) != 0)) &&
  13704         "Specifying VMA_ALLOCATION_CREATE_HOST_ACCESS_ALLOW_TRANSFER_INSTEAD_BIT requires also VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT or VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT.");
  13705     if(inoutCreateInfo.usage == VMA_MEMORY_USAGE_AUTO || inoutCreateInfo.usage == VMA_MEMORY_USAGE_AUTO_PREFER_DEVICE || inoutCreateInfo.usage == VMA_MEMORY_USAGE_AUTO_PREFER_HOST)
  13706     {
  13707         if((inoutCreateInfo.flags & VMA_ALLOCATION_CREATE_MAPPED_BIT) != 0)
  13708         {
  13709             VMA_ASSERT((inoutCreateInfo.flags & (VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT | VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT)) != 0 &&
  13710                 "When using VMA_ALLOCATION_CREATE_MAPPED_BIT and usage = VMA_MEMORY_USAGE_AUTO*, you must also specify VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT or VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT.");
  13711         }
  13712     }
  13713 
  13714     // If memory is lazily allocated, it should always be dedicated.
  13715     if(dedicatedRequired ||
  13716         inoutCreateInfo.usage == VMA_MEMORY_USAGE_GPU_LAZILY_ALLOCATED)
  13717     {
  13718         inoutCreateInfo.flags |= VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT;
  13719     }
  13720 
  13721     if(inoutCreateInfo.pool != VK_NULL_HANDLE)
  13722     {
  13723         if(inoutCreateInfo.pool->m_BlockVector.HasExplicitBlockSize() &&
  13724             (inoutCreateInfo.flags & VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT) != 0)
  13725         {
  13726             VMA_ASSERT(0 && "Specifying VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT while current custom pool doesn't support dedicated allocations.");
  13727             return VK_ERROR_FEATURE_NOT_PRESENT;
  13728         }
  13729         inoutCreateInfo.priority = inoutCreateInfo.pool->m_BlockVector.GetPriority();
  13730     }
  13731 
  13732     if((inoutCreateInfo.flags & VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT) != 0 &&
  13733         (inoutCreateInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) != 0)
  13734     {
  13735         VMA_ASSERT(0 && "Specifying VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT together with VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT makes no sense.");
  13736         return VK_ERROR_FEATURE_NOT_PRESENT;
  13737     }
  13738 
  13739     if(VMA_DEBUG_ALWAYS_DEDICATED_MEMORY &&
  13740         (inoutCreateInfo.flags & VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT) != 0)
  13741     {
  13742         inoutCreateInfo.flags |= VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT;
  13743     }
  13744 
  13745     // Non-auto USAGE values imply HOST_ACCESS flags,
  13746     // and so does VMA_MEMORY_USAGE_UNKNOWN because it is used with custom pools.
  13747     // Which specific flag is used doesn't matter: they change behavior only when used with VMA_MEMORY_USAGE_AUTO*.
  13748     // Otherwise they just protect against the assert on mapping.
  13749     if(inoutCreateInfo.usage != VMA_MEMORY_USAGE_AUTO &&
  13750         inoutCreateInfo.usage != VMA_MEMORY_USAGE_AUTO_PREFER_DEVICE &&
  13751         inoutCreateInfo.usage != VMA_MEMORY_USAGE_AUTO_PREFER_HOST)
  13752     {
  13753         if((inoutCreateInfo.flags & (VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT | VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT)) == 0)
  13754         {
  13755             inoutCreateInfo.flags |= VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT;
  13756         }
  13757     }
  13758 
  13759     return VK_SUCCESS;
  13760 }
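
// Editor's note: a flag combination that passes all the asserts above
// (sketch): with VMA_MEMORY_USAGE_AUTO*, VMA_ALLOCATION_CREATE_MAPPED_BIT
// must be paired with one HOST_ACCESS flag, and the two HOST_ACCESS flags
// are mutually exclusive.
//
//     VmaAllocationCreateInfo allocCreateInfo = {};
//     allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
//     allocCreateInfo.flags =
//         VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT |
//         VMA_ALLOCATION_CREATE_MAPPED_BIT;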
  13761 
  13762 VkResult VmaAllocator_T::AllocateMemory(
  13763     const VkMemoryRequirements& vkMemReq,
  13764     bool requiresDedicatedAllocation,
  13765     bool prefersDedicatedAllocation,
  13766     VkBuffer dedicatedBuffer,
  13767     VkImage dedicatedImage,
  13768     VmaBufferImageUsage dedicatedBufferImageUsage,
  13769     const VmaAllocationCreateInfo& createInfo,
  13770     VmaSuballocationType suballocType,
  13771     size_t allocationCount,
  13772     VmaAllocation* pAllocations)
  13773 {
  13774     memset(pAllocations, 0, sizeof(VmaAllocation) * allocationCount);
  13775 
  13776     VMA_ASSERT(VmaIsPow2(vkMemReq.alignment));
  13777 
  13778     if(vkMemReq.size == 0)
  13779     {
  13780         return VK_ERROR_INITIALIZATION_FAILED;
  13781     }
  13782 
  13783     VmaAllocationCreateInfo createInfoFinal = createInfo;
  13784     VkResult res = CalcAllocationParams(createInfoFinal, requiresDedicatedAllocation, prefersDedicatedAllocation);
  13785     if(res != VK_SUCCESS)
  13786         return res;
  13787 
  13788     if(createInfoFinal.pool != VK_NULL_HANDLE)
  13789     {
  13790         VmaBlockVector& blockVector = createInfoFinal.pool->m_BlockVector;
  13791         return AllocateMemoryOfType(
  13792             createInfoFinal.pool,
  13793             vkMemReq.size,
  13794             vkMemReq.alignment,
  13795             prefersDedicatedAllocation,
  13796             dedicatedBuffer,
  13797             dedicatedImage,
  13798             dedicatedBufferImageUsage,
  13799             createInfoFinal,
  13800             blockVector.GetMemoryTypeIndex(),
  13801             suballocType,
  13802             createInfoFinal.pool->m_DedicatedAllocations,
  13803             blockVector,
  13804             allocationCount,
  13805             pAllocations);
  13806     }
  13807     else
  13808     {
  13809         // Bit mask of Vulkan memory types acceptable for this allocation.
  13810         uint32_t memoryTypeBits = vkMemReq.memoryTypeBits;
  13811         uint32_t memTypeIndex = UINT32_MAX;
  13812         res = FindMemoryTypeIndex(memoryTypeBits, &createInfoFinal, dedicatedBufferImageUsage, &memTypeIndex);
  13813         // Can't find any single memory type matching requirements. res is VK_ERROR_FEATURE_NOT_PRESENT.
  13814         if(res != VK_SUCCESS)
  13815             return res;
  13816         do
  13817         {
  13818             VmaBlockVector* blockVector = m_pBlockVectors[memTypeIndex];
  13819             VMA_ASSERT(blockVector && "Trying to use unsupported memory type!");
  13820             res = AllocateMemoryOfType(
  13821                 VK_NULL_HANDLE,
  13822                 vkMemReq.size,
  13823                 vkMemReq.alignment,
  13824                 requiresDedicatedAllocation || prefersDedicatedAllocation,
  13825                 dedicatedBuffer,
  13826                 dedicatedImage,
  13827                 dedicatedBufferImageUsage,
  13828                 createInfoFinal,
  13829                 memTypeIndex,
  13830                 suballocType,
  13831                 m_DedicatedAllocations[memTypeIndex],
  13832                 *blockVector,
  13833                 allocationCount,
  13834                 pAllocations);
  13835             // Allocation succeeded
  13836             if(res == VK_SUCCESS)
  13837                 return VK_SUCCESS;
  13838 
  13839             // Remove old memTypeIndex from list of possibilities.
  13840             memoryTypeBits &= ~(1u << memTypeIndex);
  13841             // Find alternative memTypeIndex.
  13842             res = FindMemoryTypeIndex(memoryTypeBits, &createInfoFinal, dedicatedBufferImageUsage, &memTypeIndex);
  13843         } while(res == VK_SUCCESS);
  13844 
  13845         // No other matching memory type index could be found.
  13846         // Not returning res, which is VK_ERROR_FEATURE_NOT_PRESENT, because we already failed to allocate once.
  13847         return VK_ERROR_OUT_OF_DEVICE_MEMORY;
  13848     }
  13849 }
  13850 
  13851 void VmaAllocator_T::FreeMemory(
  13852     size_t allocationCount,
  13853     const VmaAllocation* pAllocations)
  13854 {
  13855     VMA_ASSERT(pAllocations);
  13856 
  13857     for(size_t allocIndex = allocationCount; allocIndex--; )
  13858     {
  13859         VmaAllocation allocation = pAllocations[allocIndex];
  13860 
  13861         if(allocation != VK_NULL_HANDLE)
  13862         {
  13863             if(VMA_DEBUG_INITIALIZE_ALLOCATIONS)
  13864             {
  13865                 FillAllocation(allocation, VMA_ALLOCATION_FILL_PATTERN_DESTROYED);
  13866             }
  13867 
  13868             allocation->FreeName(this);
  13869 
  13870             switch(allocation->GetType())
  13871             {
  13872             case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
  13873                 {
  13874                     VmaBlockVector* pBlockVector = VMA_NULL;
  13875                     VmaPool hPool = allocation->GetParentPool();
  13876                     if(hPool != VK_NULL_HANDLE)
  13877                     {
  13878                         pBlockVector = &hPool->m_BlockVector;
  13879                     }
  13880                     else
  13881                     {
  13882                         const uint32_t memTypeIndex = allocation->GetMemoryTypeIndex();
  13883                         pBlockVector = m_pBlockVectors[memTypeIndex];
  13884                         VMA_ASSERT(pBlockVector && "Trying to free memory of unsupported type!");
  13885                     }
  13886                     pBlockVector->Free(allocation);
  13887                 }
  13888                 break;
  13889             case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
  13890                 FreeDedicatedMemory(allocation);
  13891                 break;
  13892             default:
  13893                 VMA_ASSERT(0);
  13894             }
  13895         }
  13896     }
  13897 }
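
// Editor's note: FreeMemory() above tolerates VK_NULL_HANDLE entries, so a
// batched free through the public API may pass a sparse array; sketch with
// illustrative handles:
//
//     VmaAllocation allocations[3] = { alloc0, VK_NULL_HANDLE, alloc2 };
//     vmaFreeMemoryPages(allocator, 3, allocations);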
  13898 
  13899 void VmaAllocator_T::CalculateStatistics(VmaTotalStatistics* pStats)
  13900 {
  13901     // Initialize.
  13902     VmaClearDetailedStatistics(pStats->total);
  13903     for(uint32_t i = 0; i < VK_MAX_MEMORY_TYPES; ++i)
  13904         VmaClearDetailedStatistics(pStats->memoryType[i]);
  13905     for(uint32_t i = 0; i < VK_MAX_MEMORY_HEAPS; ++i)
  13906         VmaClearDetailedStatistics(pStats->memoryHeap[i]);
  13907 
  13908     // Process default pools.
  13909     for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
  13910     {
  13911         VmaBlockVector* const pBlockVector = m_pBlockVectors[memTypeIndex];
  13912         if (pBlockVector != VMA_NULL)
  13913             pBlockVector->AddDetailedStatistics(pStats->memoryType[memTypeIndex]);
  13914     }
  13915 
  13916     // Process custom pools.
  13917     {
  13918         VmaMutexLockRead lock(m_PoolsMutex, m_UseMutex);
  13919         for(VmaPool pool = m_Pools.Front(); pool != VMA_NULL; pool = m_Pools.GetNext(pool))
  13920         {
  13921             VmaBlockVector& blockVector = pool->m_BlockVector;
  13922             const uint32_t memTypeIndex = blockVector.GetMemoryTypeIndex();
  13923             blockVector.AddDetailedStatistics(pStats->memoryType[memTypeIndex]);
  13924             pool->m_DedicatedAllocations.AddDetailedStatistics(pStats->memoryType[memTypeIndex]);
  13925         }
  13926     }
  13927 
  13928     // Process dedicated allocations.
  13929     for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
  13930     {
  13931         m_DedicatedAllocations[memTypeIndex].AddDetailedStatistics(pStats->memoryType[memTypeIndex]);
  13932     }
  13933 
  13934     // Sum from memory types to memory heaps.
  13935     for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
  13936     {
  13937         const uint32_t memHeapIndex = m_MemProps.memoryTypes[memTypeIndex].heapIndex;
  13938         VmaAddDetailedStatistics(pStats->memoryHeap[memHeapIndex], pStats->memoryType[memTypeIndex]);
  13939     }
  13940 
  13941     // Sum from memory heaps to total.
  13942     for(uint32_t memHeapIndex = 0; memHeapIndex < GetMemoryHeapCount(); ++memHeapIndex)
  13943         VmaAddDetailedStatistics(pStats->total, pStats->memoryHeap[memHeapIndex]);
  13944 
  13945     VMA_ASSERT(pStats->total.statistics.allocationCount == 0 ||
  13946         pStats->total.allocationSizeMax >= pStats->total.allocationSizeMin);
  13947     VMA_ASSERT(pStats->total.unusedRangeCount == 0 ||
  13948         pStats->total.unusedRangeSizeMax >= pStats->total.unusedRangeSizeMin);
  13949 }
  13950 
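/*
Example (illustrative sketch): reading the aggregates computed above through the
public API. VmaTotalStatistics holds per-type, per-heap, and total statistics,
exactly as CalculateStatistics() fills them. The helper name is an assumption.

    #include <cstdio>

    void LogTotalUsage(VmaAllocator allocator)
    {
        VmaTotalStatistics stats;
        vmaCalculateStatistics(allocator, &stats);
        printf("allocations: %u, allocated bytes: %llu\n",
            stats.total.statistics.allocationCount,
            (unsigned long long)stats.total.statistics.allocationBytes);
    }
*/
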
  13951 void VmaAllocator_T::GetHeapBudgets(VmaBudget* outBudgets, uint32_t firstHeap, uint32_t heapCount)
  13952 {
  13953 #if VMA_MEMORY_BUDGET
  13954     if(m_UseExtMemoryBudget)
  13955     {
  13956         if(m_Budget.m_OperationsSinceBudgetFetch < 30) // Heuristic: the cached budget stays valid for up to 30 allocate/free operations.
  13957         {
  13958             VmaMutexLockRead lockRead(m_Budget.m_BudgetMutex, m_UseMutex);
  13959             for(uint32_t i = 0; i < heapCount; ++i, ++outBudgets)
  13960             {
  13961                 const uint32_t heapIndex = firstHeap + i;
  13962 
  13963                 outBudgets->statistics.blockCount = m_Budget.m_BlockCount[heapIndex];
  13964                 outBudgets->statistics.allocationCount = m_Budget.m_AllocationCount[heapIndex];
  13965                 outBudgets->statistics.blockBytes = m_Budget.m_BlockBytes[heapIndex];
  13966                 outBudgets->statistics.allocationBytes = m_Budget.m_AllocationBytes[heapIndex];
  13967 
  13968                 if(m_Budget.m_VulkanUsage[heapIndex] + outBudgets->statistics.blockBytes > m_Budget.m_BlockBytesAtBudgetFetch[heapIndex])
  13969                 {
  13970                     outBudgets->usage = m_Budget.m_VulkanUsage[heapIndex] +
  13971                         outBudgets->statistics.blockBytes - m_Budget.m_BlockBytesAtBudgetFetch[heapIndex];
  13972                 }
  13973                 else
  13974                 {
  13975                     outBudgets->usage = 0;
  13976                 }
  13977 
  13978                 // Have to take MIN with the heap size because an explicit HeapSizeLimit is already applied to m_MemProps.memoryHeaps[heapIndex].size.
  13979                 outBudgets->budget = VMA_MIN(
  13980                     m_Budget.m_VulkanBudget[heapIndex], m_MemProps.memoryHeaps[heapIndex].size);
  13981             }
  13982         }
  13983         else
  13984         {
  13985             UpdateVulkanBudget(); // Outside of mutex lock
  13986             GetHeapBudgets(outBudgets, firstHeap, heapCount); // Recursion
  13987         }
  13988     }
  13989     else
  13990 #endif
  13991     {
  13992         for(uint32_t i = 0; i < heapCount; ++i, ++outBudgets)
  13993         {
  13994             const uint32_t heapIndex = firstHeap + i;
  13995 
  13996             outBudgets->statistics.blockCount = m_Budget.m_BlockCount[heapIndex];
  13997             outBudgets->statistics.allocationCount = m_Budget.m_AllocationCount[heapIndex];
  13998             outBudgets->statistics.blockBytes = m_Budget.m_BlockBytes[heapIndex];
  13999             outBudgets->statistics.allocationBytes = m_Budget.m_AllocationBytes[heapIndex];
  14000 
  14001             outBudgets->usage = outBudgets->statistics.blockBytes;
  14002             outBudgets->budget = m_MemProps.memoryHeaps[heapIndex].size * 8 / 10; // Heuristic: assume 80% of the heap is available.
  14003         }
  14004     }
  14005 }
  14006 
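/*
Example (illustrative sketch): querying the budget computed above via the public
API. Between budget fetches, usage is estimated as
m_VulkanUsage + (current blockBytes - blockBytes at fetch time), i.e. the driver
value plus the net growth observed since. Heap index 0 and the helper name are
assumptions for illustration.

    VkDeviceSize EstimateFreeHeap0Bytes(VmaAllocator allocator)
    {
        VmaBudget budgets[VK_MAX_MEMORY_HEAPS];
        vmaGetHeapBudgets(allocator, budgets);
        const VmaBudget& b = budgets[0];
        return b.usage < b.budget ? b.budget - b.usage : 0;
    }
*/
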
  14007 void VmaAllocator_T::GetAllocationInfo(VmaAllocation hAllocation, VmaAllocationInfo* pAllocationInfo)
  14008 {
  14009     pAllocationInfo->memoryType = hAllocation->GetMemoryTypeIndex();
  14010     pAllocationInfo->deviceMemory = hAllocation->GetMemory();
  14011     pAllocationInfo->offset = hAllocation->GetOffset();
  14012     pAllocationInfo->size = hAllocation->GetSize();
  14013     pAllocationInfo->pMappedData = hAllocation->GetMappedData();
  14014     pAllocationInfo->pUserData = hAllocation->GetUserData();
  14015     pAllocationInfo->pName = hAllocation->GetName();
  14016 }
  14017 
  14018 void VmaAllocator_T::GetAllocationInfo2(VmaAllocation hAllocation, VmaAllocationInfo2* pAllocationInfo)
  14019 {
  14020     GetAllocationInfo(hAllocation, &pAllocationInfo->allocationInfo);
  14021 
  14022     switch (hAllocation->GetType())
  14023     {
  14024     case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
  14025         pAllocationInfo->blockSize = hAllocation->GetBlock()->m_pMetadata->GetSize();
  14026         pAllocationInfo->dedicatedMemory = VK_FALSE;
  14027         break;
  14028     case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
  14029         pAllocationInfo->blockSize = pAllocationInfo->allocationInfo.size;
  14030         pAllocationInfo->dedicatedMemory = VK_TRUE;
  14031         break;
  14032     default:
  14033         VMA_ASSERT(0);
  14034     }
  14035 }
  14036 
  14037 VkResult VmaAllocator_T::CreatePool(const VmaPoolCreateInfo* pCreateInfo, VmaPool* pPool)
  14038 {
  14039     VMA_DEBUG_LOG_FORMAT("  CreatePool: MemoryTypeIndex=%" PRIu32 ", flags=%" PRIu32, pCreateInfo->memoryTypeIndex, pCreateInfo->flags);
  14040 
  14041     VmaPoolCreateInfo newCreateInfo = *pCreateInfo;
  14042 
  14043     // Protection against an uninitialized new structure member. If garbage data is left there, dereferencing this pointer would crash.
  14044     if(pCreateInfo->pMemoryAllocateNext)
  14045     {
  14046         VMA_ASSERT(((const VkBaseInStructure*)pCreateInfo->pMemoryAllocateNext)->sType != 0);
  14047     }
  14048 
  14049     if(newCreateInfo.maxBlockCount == 0)
  14050     {
  14051         newCreateInfo.maxBlockCount = SIZE_MAX;
  14052     }
  14053     if(newCreateInfo.minBlockCount > newCreateInfo.maxBlockCount)
  14054     {
  14055         return VK_ERROR_INITIALIZATION_FAILED;
  14056     }
  14057     // Memory type index out of range or forbidden.
  14058     if(pCreateInfo->memoryTypeIndex >= GetMemoryTypeCount() ||
  14059         ((1u << pCreateInfo->memoryTypeIndex) & m_GlobalMemoryTypeBits) == 0)
  14060     {
  14061         return VK_ERROR_FEATURE_NOT_PRESENT;
  14062     }
  14063     if(newCreateInfo.minAllocationAlignment > 0)
  14064     {
  14065         VMA_ASSERT(VmaIsPow2(newCreateInfo.minAllocationAlignment));
  14066     }
  14067 
  14068     const VkDeviceSize preferredBlockSize = CalcPreferredBlockSize(newCreateInfo.memoryTypeIndex);
  14069 
  14070     *pPool = vma_new(this, VmaPool_T)(this, newCreateInfo, preferredBlockSize);
  14071 
  14072     VkResult res = (*pPool)->m_BlockVector.CreateMinBlocks();
  14073     if(res != VK_SUCCESS)
  14074     {
  14075         vma_delete(this, *pPool);
  14076         *pPool = VMA_NULL;
  14077         return res;
  14078     }
  14079 
  14080     // Add to m_Pools.
  14081     {
  14082         VmaMutexLockWrite lock(m_PoolsMutex, m_UseMutex);
  14083         (*pPool)->SetId(m_NextPoolId++);
  14084         m_Pools.PushBack(*pPool);
  14085     }
  14086 
  14087     return VK_SUCCESS;
  14088 }
  14089 
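/*
Example (illustrative sketch): creating a custom pool through the public API,
which lands in CreatePool() above. memTypeIndex is assumed to come from a prior
vmaFindMemoryTypeIndex* query.

    VmaPoolCreateInfo poolCreateInfo = {};
    poolCreateInfo.memoryTypeIndex = memTypeIndex;
    poolCreateInfo.minBlockCount = 1;
    poolCreateInfo.maxBlockCount = 8; // 0 would mean "no limit" (SIZE_MAX above).

    VmaPool pool = VK_NULL_HANDLE;
    VkResult res = vmaCreatePool(allocator, &poolCreateInfo, &pool);
    // ... allocate from it via VmaAllocationCreateInfo::pool, then:
    vmaDestroyPool(allocator, pool);
*/
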
  14090 void VmaAllocator_T::DestroyPool(VmaPool pool)
  14091 {
  14092     // Remove from m_Pools.
  14093     {
  14094         VmaMutexLockWrite lock(m_PoolsMutex, m_UseMutex);
  14095         m_Pools.Remove(pool);
  14096     }
  14097 
  14098     vma_delete(this, pool);
  14099 }
  14100 
  14101 void VmaAllocator_T::GetPoolStatistics(VmaPool pool, VmaStatistics* pPoolStats)
  14102 {
  14103     VmaClearStatistics(*pPoolStats);
  14104     pool->m_BlockVector.AddStatistics(*pPoolStats);
  14105     pool->m_DedicatedAllocations.AddStatistics(*pPoolStats);
  14106 }
  14107 
  14108 void VmaAllocator_T::CalculatePoolStatistics(VmaPool pool, VmaDetailedStatistics* pPoolStats)
  14109 {
  14110     VmaClearDetailedStatistics(*pPoolStats);
  14111     pool->m_BlockVector.AddDetailedStatistics(*pPoolStats);
  14112     pool->m_DedicatedAllocations.AddDetailedStatistics(*pPoolStats);
  14113 }
  14114 
  14115 void VmaAllocator_T::SetCurrentFrameIndex(uint32_t frameIndex)
  14116 {
  14117     m_CurrentFrameIndex.store(frameIndex);
  14118 
  14119 #if VMA_MEMORY_BUDGET
  14120     if(m_UseExtMemoryBudget)
  14121     {
  14122         UpdateVulkanBudget();
  14123     }
  14124 #endif // #if VMA_MEMORY_BUDGET
  14125 }
  14126 
  14127 VkResult VmaAllocator_T::CheckPoolCorruption(VmaPool hPool)
  14128 {
  14129     return hPool->m_BlockVector.CheckCorruption();
  14130 }
  14131 
  14132 VkResult VmaAllocator_T::CheckCorruption(uint32_t memoryTypeBits)
  14133 {
  14134     VkResult finalRes = VK_ERROR_FEATURE_NOT_PRESENT;
  14135 
  14136     // Process default pools.
  14137     for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
  14138     {
  14139         VmaBlockVector* const pBlockVector = m_pBlockVectors[memTypeIndex];
  14140         if(pBlockVector != VMA_NULL)
  14141         {
  14142             VkResult localRes = pBlockVector->CheckCorruption();
  14143             switch(localRes)
  14144             {
  14145             case VK_ERROR_FEATURE_NOT_PRESENT:
  14146                 break;
  14147             case VK_SUCCESS:
  14148                 finalRes = VK_SUCCESS;
  14149                 break;
  14150             default:
  14151                 return localRes;
  14152             }
  14153         }
  14154     }
  14155 
  14156     // Process custom pools.
  14157     {
  14158         VmaMutexLockRead lock(m_PoolsMutex, m_UseMutex);
  14159         for(VmaPool pool = m_Pools.Front(); pool != VMA_NULL; pool = m_Pools.GetNext(pool))
  14160         {
  14161             if(((1u << pool->m_BlockVector.GetMemoryTypeIndex()) & memoryTypeBits) != 0)
  14162             {
  14163                 VkResult localRes = pool->m_BlockVector.CheckCorruption();
  14164                 switch(localRes)
  14165                 {
  14166                 case VK_ERROR_FEATURE_NOT_PRESENT:
  14167                     break;
  14168                 case VK_SUCCESS:
  14169                     finalRes = VK_SUCCESS;
  14170                     break;
  14171                 default:
  14172                     return localRes;
  14173                 }
  14174             }
  14175         }
  14176     }
  14177 
  14178     return finalRes;
  14179 }
  14180 
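/*
Example (illustrative sketch): running the check above over all memory types
via the public API.

    VkResult res = vmaCheckCorruption(allocator, UINT32_MAX); // All memory types.
    // Expected: VK_SUCCESS, or VK_ERROR_FEATURE_NOT_PRESENT when corruption
    // detection (VMA_DEBUG_DETECT_CORRUPTION) is disabled or unsupported.
*/
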
  14181 VkResult VmaAllocator_T::AllocateVulkanMemory(const VkMemoryAllocateInfo* pAllocateInfo, VkDeviceMemory* pMemory)
  14182 {
  14183     AtomicTransactionalIncrement<VMA_ATOMIC_UINT32> deviceMemoryCountIncrement;
  14184     const uint64_t prevDeviceMemoryCount = deviceMemoryCountIncrement.Increment(&m_DeviceMemoryCount);
  14185 #if VMA_DEBUG_DONT_EXCEED_MAX_MEMORY_ALLOCATION_COUNT
  14186     if(prevDeviceMemoryCount >= m_PhysicalDeviceProperties.limits.maxMemoryAllocationCount)
  14187     {
  14188         return VK_ERROR_TOO_MANY_OBJECTS;
  14189     }
  14190 #endif
  14191 
  14192     const uint32_t heapIndex = MemoryTypeIndexToHeapIndex(pAllocateInfo->memoryTypeIndex);
  14193 
  14194     // HeapSizeLimit is in effect for this heap.
  14195     if((m_HeapSizeLimitMask & (1u << heapIndex)) != 0)
  14196     {
  14197         const VkDeviceSize heapSize = m_MemProps.memoryHeaps[heapIndex].size;
  14198         VkDeviceSize blockBytes = m_Budget.m_BlockBytes[heapIndex];
  14199         for(;;)
  14200         {
  14201             const VkDeviceSize blockBytesAfterAllocation = blockBytes + pAllocateInfo->allocationSize;
  14202             if(blockBytesAfterAllocation > heapSize)
  14203             {
  14204                 return VK_ERROR_OUT_OF_DEVICE_MEMORY;
  14205             }
  14206             if(m_Budget.m_BlockBytes[heapIndex].compare_exchange_strong(blockBytes, blockBytesAfterAllocation))
  14207             {
  14208                 break;
  14209             }
  14210         }
  14211     }
  14212     else
  14213     {
  14214         m_Budget.m_BlockBytes[heapIndex] += pAllocateInfo->allocationSize;
  14215     }
  14216     ++m_Budget.m_BlockCount[heapIndex];
  14217 
  14218     // VULKAN CALL vkAllocateMemory.
  14219     VkResult res = (*m_VulkanFunctions.vkAllocateMemory)(m_hDevice, pAllocateInfo, GetAllocationCallbacks(), pMemory);
  14220 
  14221     if(res == VK_SUCCESS)
  14222     {
  14223 #if VMA_MEMORY_BUDGET
  14224         ++m_Budget.m_OperationsSinceBudgetFetch;
  14225 #endif
  14226 
  14227         // Informative callback.
  14228         if(m_DeviceMemoryCallbacks.pfnAllocate != VMA_NULL)
  14229         {
  14230             (*m_DeviceMemoryCallbacks.pfnAllocate)(this, pAllocateInfo->memoryTypeIndex, *pMemory, pAllocateInfo->allocationSize, m_DeviceMemoryCallbacks.pUserData);
  14231         }
  14232 
  14233         deviceMemoryCountIncrement.Commit();
  14234     }
  14235     else
  14236     {
  14237         --m_Budget.m_BlockCount[heapIndex];
  14238         m_Budget.m_BlockBytes[heapIndex] -= pAllocateInfo->allocationSize;
  14239     }
  14240 
  14241     return res;
  14242 }
  14243 
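/*
The heap-limited path above uses a compare-and-swap retry loop: read the
counter, test the limit, and attempt the update until no other thread has
raced in between. A minimal standalone sketch of the same pattern (an
assumption for illustration, not library code):

    #include <atomic>
    #include <cstdint>

    bool TryReserve(std::atomic<uint64_t>& used, uint64_t limit, uint64_t size)
    {
        uint64_t current = used.load();
        for(;;)
        {
            if(current + size > limit)
                return false; // Would exceed the limit.
            // On failure, compare_exchange_weak reloads 'current' and we retry.
            if(used.compare_exchange_weak(current, current + size))
                return true;
        }
    }
*/
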
  14244 void VmaAllocator_T::FreeVulkanMemory(uint32_t memoryType, VkDeviceSize size, VkDeviceMemory hMemory)
  14245 {
  14246     // Informative callback.
  14247     if(m_DeviceMemoryCallbacks.pfnFree != VMA_NULL)
  14248     {
  14249         (*m_DeviceMemoryCallbacks.pfnFree)(this, memoryType, hMemory, size, m_DeviceMemoryCallbacks.pUserData);
  14250     }
  14251 
  14252     // VULKAN CALL vkFreeMemory.
  14253     (*m_VulkanFunctions.vkFreeMemory)(m_hDevice, hMemory, GetAllocationCallbacks());
  14254 
  14255     const uint32_t heapIndex = MemoryTypeIndexToHeapIndex(memoryType);
  14256     --m_Budget.m_BlockCount[heapIndex];
  14257     m_Budget.m_BlockBytes[heapIndex] -= size;
  14258 
  14259     --m_DeviceMemoryCount;
  14260 }
  14261 
  14262 VkResult VmaAllocator_T::BindVulkanBuffer(
  14263     VkDeviceMemory memory,
  14264     VkDeviceSize memoryOffset,
  14265     VkBuffer buffer,
  14266     const void* pNext)
  14267 {
  14268     if(pNext != VMA_NULL)
  14269     {
  14270 #if VMA_VULKAN_VERSION >= 1001000 || VMA_BIND_MEMORY2
  14271         if((m_UseKhrBindMemory2 || m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0)) &&
  14272             m_VulkanFunctions.vkBindBufferMemory2KHR != VMA_NULL)
  14273         {
  14274             VkBindBufferMemoryInfoKHR bindBufferMemoryInfo = { VK_STRUCTURE_TYPE_BIND_BUFFER_MEMORY_INFO_KHR };
  14275             bindBufferMemoryInfo.pNext = pNext;
  14276             bindBufferMemoryInfo.buffer = buffer;
  14277             bindBufferMemoryInfo.memory = memory;
  14278             bindBufferMemoryInfo.memoryOffset = memoryOffset;
  14279             return (*m_VulkanFunctions.vkBindBufferMemory2KHR)(m_hDevice, 1, &bindBufferMemoryInfo);
  14280         }
  14281         else
  14282 #endif // #if VMA_VULKAN_VERSION >= 1001000 || VMA_BIND_MEMORY2
  14283         {
  14284             return VK_ERROR_EXTENSION_NOT_PRESENT;
  14285         }
  14286     }
  14287     else
  14288     {
  14289         return (*m_VulkanFunctions.vkBindBufferMemory)(m_hDevice, buffer, memory, memoryOffset);
  14290     }
  14291 }
  14292 
  14293 VkResult VmaAllocator_T::BindVulkanImage(
  14294     VkDeviceMemory memory,
  14295     VkDeviceSize memoryOffset,
  14296     VkImage image,
  14297     const void* pNext)
  14298 {
  14299     if(pNext != VMA_NULL)
  14300     {
  14301 #if VMA_VULKAN_VERSION >= 1001000 || VMA_BIND_MEMORY2
  14302         if((m_UseKhrBindMemory2 || m_VulkanApiVersion >= VK_MAKE_VERSION(1, 1, 0)) &&
  14303             m_VulkanFunctions.vkBindImageMemory2KHR != VMA_NULL)
  14304         {
  14305             VkBindImageMemoryInfoKHR bindImageMemoryInfo = { VK_STRUCTURE_TYPE_BIND_IMAGE_MEMORY_INFO_KHR };
  14306             bindImageMemoryInfo.pNext = pNext;
  14307             bindImageMemoryInfo.image = image;
  14308             bindImageMemoryInfo.memory = memory;
  14309             bindImageMemoryInfo.memoryOffset = memoryOffset;
  14310             return (*m_VulkanFunctions.vkBindImageMemory2KHR)(m_hDevice, 1, &bindImageMemoryInfo);
  14311         }
  14312         else
  14313 #endif // #if VMA_VULKAN_VERSION >= 1001000 || VMA_BIND_MEMORY2
  14314         {
  14315             return VK_ERROR_EXTENSION_NOT_PRESENT;
  14316         }
  14317     }
  14318     else
  14319     {
  14320         return (*m_VulkanFunctions.vkBindImageMemory)(m_hDevice, image, memory, memoryOffset);
  14321     }
  14322 }
  14323 
  14324 VkResult VmaAllocator_T::Map(VmaAllocation hAllocation, void** ppData)
  14325 {
  14326     switch(hAllocation->GetType())
  14327     {
  14328     case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
  14329         {
  14330             VmaDeviceMemoryBlock* const pBlock = hAllocation->GetBlock();
  14331             char *pBytes = VMA_NULL;
  14332             VkResult res = pBlock->Map(this, 1, (void**)&pBytes);
  14333             if(res == VK_SUCCESS)
  14334             {
  14335                 *ppData = pBytes + (ptrdiff_t)hAllocation->GetOffset();
  14336                 hAllocation->BlockAllocMap();
  14337             }
  14338             return res;
  14339         }
  14340     case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
  14341         return hAllocation->DedicatedAllocMap(this, ppData);
  14342     default:
  14343         VMA_ASSERT(0);
  14344         return VK_ERROR_MEMORY_MAP_FAILED;
  14345     }
  14346 }
  14347 
  14348 void VmaAllocator_T::Unmap(VmaAllocation hAllocation)
  14349 {
  14350     switch(hAllocation->GetType())
  14351     {
  14352     case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
  14353         {
  14354             VmaDeviceMemoryBlock* const pBlock = hAllocation->GetBlock();
  14355             hAllocation->BlockAllocUnmap();
  14356             pBlock->Unmap(this, 1);
  14357         }
  14358         break;
  14359     case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
  14360         hAllocation->DedicatedAllocUnmap(this);
  14361         break;
  14362     default:
  14363         VMA_ASSERT(0);
  14364     }
  14365 }
  14366 
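/*
Example (illustrative sketch): the public mapping API that forwards to
Map()/Unmap() above. Mapping is reference-counted for block allocations, so
every successful vmaMapMemory() must be paired with one vmaUnmapMemory().
srcData and srcSize are assumptions standing in for the caller's data.

    void* pData = nullptr;
    VkResult res = vmaMapMemory(allocator, allocation, &pData);
    if(res == VK_SUCCESS)
    {
        memcpy(pData, srcData, (size_t)srcSize);
        vmaUnmapMemory(allocator, allocation);
    }
*/
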
  14367 VkResult VmaAllocator_T::BindBufferMemory(
  14368     VmaAllocation hAllocation,
  14369     VkDeviceSize allocationLocalOffset,
  14370     VkBuffer hBuffer,
  14371     const void* pNext)
  14372 {
  14373     VkResult res = VK_ERROR_UNKNOWN_COPY;
  14374     switch(hAllocation->GetType())
  14375     {
  14376     case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
  14377         res = BindVulkanBuffer(hAllocation->GetMemory(), allocationLocalOffset, hBuffer, pNext);
  14378         break;
  14379     case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
  14380     {
  14381         VmaDeviceMemoryBlock* const pBlock = hAllocation->GetBlock();
  14382         VMA_ASSERT(pBlock && "Binding buffer to allocation that doesn't belong to any block.");
  14383         res = pBlock->BindBufferMemory(this, hAllocation, allocationLocalOffset, hBuffer, pNext);
  14384         break;
  14385     }
  14386     default:
  14387         VMA_ASSERT(0);
  14388     }
  14389     return res;
  14390 }
  14391 
  14392 VkResult VmaAllocator_T::BindImageMemory(
  14393     VmaAllocation hAllocation,
  14394     VkDeviceSize allocationLocalOffset,
  14395     VkImage hImage,
  14396     const void* pNext)
  14397 {
  14398     VkResult res = VK_ERROR_UNKNOWN_COPY;
  14399     switch(hAllocation->GetType())
  14400     {
  14401     case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
  14402         res = BindVulkanImage(hAllocation->GetMemory(), allocationLocalOffset, hImage, pNext);
  14403         break;
  14404     case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
  14405     {
  14406         VmaDeviceMemoryBlock* pBlock = hAllocation->GetBlock();
  14407         VMA_ASSERT(pBlock && "Binding image to allocation that doesn't belong to any block.");
  14408         res = pBlock->BindImageMemory(this, hAllocation, allocationLocalOffset, hImage, pNext);
  14409         break;
  14410     }
  14411     default:
  14412         VMA_ASSERT(0);
  14413     }
  14414     return res;
  14415 }
  14416 
  14417 VkResult VmaAllocator_T::FlushOrInvalidateAllocation(
  14418     VmaAllocation hAllocation,
  14419     VkDeviceSize offset, VkDeviceSize size,
  14420     VMA_CACHE_OPERATION op)
  14421 {
  14422     VkResult res = VK_SUCCESS;
  14423 
  14424     VkMappedMemoryRange memRange = {};
  14425     if(GetFlushOrInvalidateRange(hAllocation, offset, size, memRange))
  14426     {
  14427         switch(op)
  14428         {
  14429         case VMA_CACHE_FLUSH:
  14430             res = (*GetVulkanFunctions().vkFlushMappedMemoryRanges)(m_hDevice, 1, &memRange);
  14431             break;
  14432         case VMA_CACHE_INVALIDATE:
  14433             res = (*GetVulkanFunctions().vkInvalidateMappedMemoryRanges)(m_hDevice, 1, &memRange);
  14434             break;
  14435         default:
  14436             VMA_ASSERT(0);
  14437         }
  14438     }
  14439     // else: Just ignore this call.
  14440     return res;
  14441 }
  14442 
  14443 VkResult VmaAllocator_T::FlushOrInvalidateAllocations(
  14444     uint32_t allocationCount,
  14445     const VmaAllocation* allocations,
  14446     const VkDeviceSize* offsets, const VkDeviceSize* sizes,
  14447     VMA_CACHE_OPERATION op)
  14448 {
  14449     typedef VmaStlAllocator<VkMappedMemoryRange> RangeAllocator;
  14450     typedef VmaSmallVector<VkMappedMemoryRange, RangeAllocator, 16> RangeVector;
  14451     RangeVector ranges = RangeVector(RangeAllocator(GetAllocationCallbacks()));
  14452 
  14453     for(uint32_t allocIndex = 0; allocIndex < allocationCount; ++allocIndex)
  14454     {
  14455         const VmaAllocation alloc = allocations[allocIndex];
  14456         const VkDeviceSize offset = offsets != VMA_NULL ? offsets[allocIndex] : 0;
  14457         const VkDeviceSize size = sizes != VMA_NULL ? sizes[allocIndex] : VK_WHOLE_SIZE;
  14458         VkMappedMemoryRange newRange;
  14459         if(GetFlushOrInvalidateRange(alloc, offset, size, newRange))
  14460         {
  14461             ranges.push_back(newRange);
  14462         }
  14463     }
  14464 
  14465     VkResult res = VK_SUCCESS;
  14466     if(!ranges.empty())
  14467     {
  14468         switch(op)
  14469         {
  14470         case VMA_CACHE_FLUSH:
  14471             res = (*GetVulkanFunctions().vkFlushMappedMemoryRanges)(m_hDevice, (uint32_t)ranges.size(), ranges.data());
  14472             break;
  14473         case VMA_CACHE_INVALIDATE:
  14474             res = (*GetVulkanFunctions().vkInvalidateMappedMemoryRanges)(m_hDevice, (uint32_t)ranges.size(), ranges.data());
  14475             break;
  14476         default:
  14477             VMA_ASSERT(0);
  14478         }
  14479     }
  14480     // else: Just ignore this call.
  14481     return res;
  14482 }
  14483 
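/*
Example (illustrative sketch): batched flush through the public API, which
forwards here. Passing null offset/size arrays means "whole allocations",
i.e. 0 and VK_WHOLE_SIZE per element, as handled above. allocA and allocB
are assumptions standing in for valid host-visible allocations.

    VmaAllocation allocs[2] = { allocA, allocB };
    VkResult res = vmaFlushAllocations(allocator, 2, allocs, nullptr, nullptr);
*/
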
  14484 VkResult VmaAllocator_T::CopyMemoryToAllocation(
  14485     const void* pSrcHostPointer,
  14486     VmaAllocation dstAllocation,
  14487     VkDeviceSize dstAllocationLocalOffset,
  14488     VkDeviceSize size)
  14489 {
  14490     void* dstMappedData = VMA_NULL;
  14491     VkResult res = Map(dstAllocation, &dstMappedData);
  14492     if(res == VK_SUCCESS)
  14493     {
  14494         memcpy((char*)dstMappedData + dstAllocationLocalOffset, pSrcHostPointer, (size_t)size);
  14495         Unmap(dstAllocation);
  14496         res = FlushOrInvalidateAllocation(dstAllocation, dstAllocationLocalOffset, size, VMA_CACHE_FLUSH);
  14497     }
  14498     return res;
  14499 }
  14500 
  14501 VkResult VmaAllocator_T::CopyAllocationToMemory(
  14502     VmaAllocation srcAllocation,
  14503     VkDeviceSize srcAllocationLocalOffset,
  14504     void* pDstHostPointer,
  14505     VkDeviceSize size)
  14506 {
  14507     void* srcMappedData = VMA_NULL;
  14508     VkResult res = Map(srcAllocation, &srcMappedData);
  14509     if(res == VK_SUCCESS)
  14510     {
  14511         res = FlushOrInvalidateAllocation(srcAllocation, srcAllocationLocalOffset, size, VMA_CACHE_INVALIDATE);
  14512         if(res == VK_SUCCESS)
  14513         {
  14514             memcpy(pDstHostPointer, (const char*)srcMappedData + srcAllocationLocalOffset, (size_t)size);
  14515         }
  14516         Unmap(srcAllocation); // Unmap unconditionally so the map reference count stays balanced even if invalidation failed.
  14517     }
  14518     return res;
  14519 }
  14520 
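/*
Example (illustrative sketch): the public wrappers of the two copy helpers
above. They handle Map/Unmap and the flush or invalidation of non-coherent
memory, so the caller only provides pointers and sizes. alloc, srcData,
dstData, and dataSize are assumptions for illustration.

    // Upload: host memory -> allocation (must be host-visible and mappable).
    VkResult res = vmaCopyMemoryToAllocation(allocator, srcData, alloc, 0, dataSize);
    // Readback: allocation -> host memory.
    if(res == VK_SUCCESS)
        res = vmaCopyAllocationToMemory(allocator, alloc, 0, dstData, dataSize);
*/
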
  14521 void VmaAllocator_T::FreeDedicatedMemory(const VmaAllocation allocation)
  14522 {
  14523     VMA_ASSERT(allocation && allocation->GetType() == VmaAllocation_T::ALLOCATION_TYPE_DEDICATED);
  14524 
  14525     const uint32_t memTypeIndex = allocation->GetMemoryTypeIndex();
  14526     VmaPool parentPool = allocation->GetParentPool();
  14527     if(parentPool == VK_NULL_HANDLE)
  14528     {
  14529         // Default pool
  14530         m_DedicatedAllocations[memTypeIndex].Unregister(allocation);
  14531     }
  14532     else
  14533     {
  14534         // Custom pool
  14535         parentPool->m_DedicatedAllocations.Unregister(allocation);
  14536     }
  14537 
  14538     VkDeviceMemory hMemory = allocation->GetMemory();
  14539 
  14540     /*
  14541     There is no need to call this, because the Vulkan spec allows skipping vkUnmapMemory
  14542     before vkFreeMemory.
  14543 
  14544     if(allocation->GetMappedData() != VMA_NULL)
  14545     {
  14546         (*m_VulkanFunctions.vkUnmapMemory)(m_hDevice, hMemory);
  14547     }
  14548     */
  14549 
  14550     FreeVulkanMemory(memTypeIndex, allocation->GetSize(), hMemory);
  14551 
  14552     m_Budget.RemoveAllocation(MemoryTypeIndexToHeapIndex(allocation->GetMemoryTypeIndex()), allocation->GetSize());
  14553     m_AllocationObjectAllocator.Free(allocation);
  14554 
  14555     VMA_DEBUG_LOG_FORMAT("    Freed DedicatedMemory MemoryTypeIndex=%" PRIu32, memTypeIndex);
  14556 }
  14557 
  14558 uint32_t VmaAllocator_T::CalculateGpuDefragmentationMemoryTypeBits() const
  14559 {
  14560     VkBufferCreateInfo dummyBufCreateInfo;
  14561     VmaFillGpuDefragmentationBufferCreateInfo(dummyBufCreateInfo);
  14562 
  14563     uint32_t memoryTypeBits = 0;
  14564 
  14565     // Create buffer.
  14566     VkBuffer buf = VK_NULL_HANDLE;
  14567     VkResult res = (*GetVulkanFunctions().vkCreateBuffer)(
  14568         m_hDevice, &dummyBufCreateInfo, GetAllocationCallbacks(), &buf);
  14569     if(res == VK_SUCCESS)
  14570     {
  14571         // Query for supported memory types.
  14572         VkMemoryRequirements memReq;
  14573         (*GetVulkanFunctions().vkGetBufferMemoryRequirements)(m_hDevice, buf, &memReq);
  14574         memoryTypeBits = memReq.memoryTypeBits;
  14575 
  14576         // Destroy buffer.
  14577         (*GetVulkanFunctions().vkDestroyBuffer)(m_hDevice, buf, GetAllocationCallbacks());
  14578     }
  14579 
  14580     return memoryTypeBits;
  14581 }
  14582 
  14583 uint32_t VmaAllocator_T::CalculateGlobalMemoryTypeBits() const
  14584 {
  14585     // Make sure memory information is already fetched.
  14586     VMA_ASSERT(GetMemoryTypeCount() > 0);
  14587 
  14588     uint32_t memoryTypeBits = UINT32_MAX;
  14589 
  14590     if(!m_UseAmdDeviceCoherentMemory)
  14591     {
  14592         // Exclude memory types that have VK_MEMORY_PROPERTY_DEVICE_COHERENT_BIT_AMD.
  14593         for(uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
  14594         {
  14595             if((m_MemProps.memoryTypes[memTypeIndex].propertyFlags & VK_MEMORY_PROPERTY_DEVICE_COHERENT_BIT_AMD_COPY) != 0)
  14596             {
  14597                 memoryTypeBits &= ~(1u << memTypeIndex);
  14598             }
  14599         }
  14600     }
  14601 
  14602     return memoryTypeBits;
  14603 }
  14604 
  14605 bool VmaAllocator_T::GetFlushOrInvalidateRange(
  14606     VmaAllocation allocation,
  14607     VkDeviceSize offset, VkDeviceSize size,
  14608     VkMappedMemoryRange& outRange) const
  14609 {
  14610     const uint32_t memTypeIndex = allocation->GetMemoryTypeIndex();
  14611     if(size > 0 && IsMemoryTypeNonCoherent(memTypeIndex))
  14612     {
  14613         const VkDeviceSize nonCoherentAtomSize = m_PhysicalDeviceProperties.limits.nonCoherentAtomSize;
  14614         const VkDeviceSize allocationSize = allocation->GetSize();
  14615         VMA_ASSERT(offset <= allocationSize);
  14616 
  14617         outRange.sType = VK_STRUCTURE_TYPE_MAPPED_MEMORY_RANGE;
  14618         outRange.pNext = VMA_NULL;
  14619         outRange.memory = allocation->GetMemory();
  14620 
  14621         switch(allocation->GetType())
  14622         {
  14623         case VmaAllocation_T::ALLOCATION_TYPE_DEDICATED:
  14624             outRange.offset = VmaAlignDown(offset, nonCoherentAtomSize);
  14625             if(size == VK_WHOLE_SIZE)
  14626             {
  14627                 outRange.size = allocationSize - outRange.offset;
  14628             }
  14629             else
  14630             {
  14631                 VMA_ASSERT(offset + size <= allocationSize);
  14632                 outRange.size = VMA_MIN(
  14633                     VmaAlignUp(size + (offset - outRange.offset), nonCoherentAtomSize),
  14634                     allocationSize - outRange.offset);
  14635             }
  14636             break;
  14637         case VmaAllocation_T::ALLOCATION_TYPE_BLOCK:
  14638         {
  14639             // 1. Still within this allocation.
  14640             outRange.offset = VmaAlignDown(offset, nonCoherentAtomSize);
  14641             if(size == VK_WHOLE_SIZE)
  14642             {
  14643                 size = allocationSize - offset;
  14644             }
  14645             else
  14646             {
  14647                 VMA_ASSERT(offset + size <= allocationSize);
  14648             }
  14649             outRange.size = VmaAlignUp(size + (offset - outRange.offset), nonCoherentAtomSize);
  14650 
  14651             // 2. Adjust to whole block.
  14652             const VkDeviceSize allocationOffset = allocation->GetOffset();
  14653             VMA_ASSERT(allocationOffset % nonCoherentAtomSize == 0);
  14654             const VkDeviceSize blockSize = allocation->GetBlock()->m_pMetadata->GetSize();
  14655             outRange.offset += allocationOffset;
  14656             outRange.size = VMA_MIN(outRange.size, blockSize - outRange.offset);
  14657 
  14658             break;
  14659         }
  14660         default:
  14661             VMA_ASSERT(0);
  14662         }
  14663         return true;
  14664     }
  14665     return false;
  14666 }
  14667 
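/*
Worked example of the alignment math above, with illustrative numbers:
nonCoherentAtomSize = 64, offset = 100, size = 200.
outRange.offset = VmaAlignDown(100, 64) = 64, and
outRange.size = VmaAlignUp(200 + (100 - 64), 64) = VmaAlignUp(236, 64) = 256.
Both are then clamped so the range never reaches past the end of the
allocation (dedicated case) or the block (block case).
*/
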
  14668 #if VMA_MEMORY_BUDGET
  14669 void VmaAllocator_T::UpdateVulkanBudget()
  14670 {
  14671     VMA_ASSERT(m_UseExtMemoryBudget);
  14672 
  14673     VkPhysicalDeviceMemoryProperties2KHR memProps = { VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_MEMORY_PROPERTIES_2_KHR };
  14674 
  14675     VkPhysicalDeviceMemoryBudgetPropertiesEXT budgetProps = { VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_MEMORY_BUDGET_PROPERTIES_EXT };
  14676     VmaPnextChainPushFront(&memProps, &budgetProps);
  14677 
  14678     GetVulkanFunctions().vkGetPhysicalDeviceMemoryProperties2KHR(m_PhysicalDevice, &memProps);
  14679 
  14680     {
  14681         VmaMutexLockWrite lockWrite(m_Budget.m_BudgetMutex, m_UseMutex);
  14682 
  14683         for(uint32_t heapIndex = 0; heapIndex < GetMemoryHeapCount(); ++heapIndex)
  14684         {
  14685             m_Budget.m_VulkanUsage[heapIndex] = budgetProps.heapUsage[heapIndex];
  14686             m_Budget.m_VulkanBudget[heapIndex] = budgetProps.heapBudget[heapIndex];
  14687             m_Budget.m_BlockBytesAtBudgetFetch[heapIndex] = m_Budget.m_BlockBytes[heapIndex].load();
  14688 
  14689             // Some buggy drivers return an incorrect budget, e.g. 0 or much bigger than the heap size.
  14690             if(m_Budget.m_VulkanBudget[heapIndex] == 0)
  14691             {
  14692                 m_Budget.m_VulkanBudget[heapIndex] = m_MemProps.memoryHeaps[heapIndex].size * 8 / 10; // Heuristic: assume 80% of the heap is available.
  14693             }
  14694             else if(m_Budget.m_VulkanBudget[heapIndex] > m_MemProps.memoryHeaps[heapIndex].size)
  14695             {
  14696                 m_Budget.m_VulkanBudget[heapIndex] = m_MemProps.memoryHeaps[heapIndex].size;
  14697             }
  14698             if(m_Budget.m_VulkanUsage[heapIndex] == 0 && m_Budget.m_BlockBytesAtBudgetFetch[heapIndex] > 0)
  14699             {
  14700                 m_Budget.m_VulkanUsage[heapIndex] = m_Budget.m_BlockBytesAtBudgetFetch[heapIndex];
  14701             }
  14702         }
  14703         m_Budget.m_OperationsSinceBudgetFetch = 0;
  14704     }
  14705 }
  14706 #endif // VMA_MEMORY_BUDGET
  14707 
  14708 void VmaAllocator_T::FillAllocation(const VmaAllocation hAllocation, uint8_t pattern)
  14709 {
  14710     if(VMA_DEBUG_INITIALIZE_ALLOCATIONS &&
  14711         hAllocation->IsMappingAllowed() &&
  14712         (m_MemProps.memoryTypes[hAllocation->GetMemoryTypeIndex()].propertyFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) != 0)
  14713     {
  14714         void* pData = VMA_NULL;
  14715         VkResult res = Map(hAllocation, &pData);
  14716         if(res == VK_SUCCESS)
  14717         {
  14718             memset(pData, (int)pattern, (size_t)hAllocation->GetSize());
  14719             FlushOrInvalidateAllocation(hAllocation, 0, VK_WHOLE_SIZE, VMA_CACHE_FLUSH);
  14720             Unmap(hAllocation);
  14721         }
  14722         else
  14723         {
  14724             VMA_ASSERT(0 && "VMA_DEBUG_INITIALIZE_ALLOCATIONS is enabled, but couldn't map memory to fill allocation.");
  14725         }
  14726     }
  14727 }
  14728 
  14729 uint32_t VmaAllocator_T::GetGpuDefragmentationMemoryTypeBits()
  14730 {
  14731     uint32_t memoryTypeBits = m_GpuDefragmentationMemoryTypeBits.load();
  14732     if(memoryTypeBits == UINT32_MAX)
  14733     {
  14734         memoryTypeBits = CalculateGpuDefragmentationMemoryTypeBits();
  14735         m_GpuDefragmentationMemoryTypeBits.store(memoryTypeBits);
  14736     }
  14737     return memoryTypeBits;
  14738 }
  14739 
  14740 #if VMA_STATS_STRING_ENABLED
  14741 void VmaAllocator_T::PrintDetailedMap(VmaJsonWriter& json)
  14742 {
  14743     json.WriteString("DefaultPools");
  14744     json.BeginObject();
  14745     {
  14746         for (uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
  14747         {
  14748             VmaBlockVector* pBlockVector = m_pBlockVectors[memTypeIndex];
  14749             VmaDedicatedAllocationList& dedicatedAllocList = m_DedicatedAllocations[memTypeIndex];
  14750             if (pBlockVector != VMA_NULL)
  14751             {
  14752                 json.BeginString("Type ");
  14753                 json.ContinueString(memTypeIndex);
  14754                 json.EndString();
  14755                 json.BeginObject();
  14756                 {
  14757                     json.WriteString("PreferredBlockSize");
  14758                     json.WriteNumber(pBlockVector->GetPreferredBlockSize());
  14759 
  14760                     json.WriteString("Blocks");
  14761                     pBlockVector->PrintDetailedMap(json);
  14762 
  14763                     json.WriteString("DedicatedAllocations");
  14764                     dedicatedAllocList.BuildStatsString(json);
  14765                 }
  14766                 json.EndObject();
  14767             }
  14768         }
  14769     }
  14770     json.EndObject();
  14771 
  14772     json.WriteString("CustomPools");
  14773     json.BeginObject();
  14774     {
  14775         VmaMutexLockRead lock(m_PoolsMutex, m_UseMutex);
  14776         if (!m_Pools.IsEmpty())
  14777         {
  14778             for (uint32_t memTypeIndex = 0; memTypeIndex < GetMemoryTypeCount(); ++memTypeIndex)
  14779             {
  14780                 bool displayType = true;
  14781                 size_t index = 0;
  14782                 for (VmaPool pool = m_Pools.Front(); pool != VMA_NULL; pool = m_Pools.GetNext(pool))
  14783                 {
  14784                     VmaBlockVector& blockVector = pool->m_BlockVector;
  14785                     if (blockVector.GetMemoryTypeIndex() == memTypeIndex)
  14786                     {
  14787                         if (displayType)
  14788                         {
  14789                             json.BeginString("Type ");
  14790                             json.ContinueString(memTypeIndex);
  14791                             json.EndString();
  14792                             json.BeginArray();
  14793                             displayType = false;
  14794                         }
  14795 
  14796                         json.BeginObject();
  14797                         {
  14798                             json.WriteString("Name");
  14799                             json.BeginString();
  14800                             json.ContinueString((uint64_t)index++);
  14801                             if (pool->GetName())
  14802                             {
  14803                                 json.ContinueString(" - ");
  14804                                 json.ContinueString(pool->GetName());
  14805                             }
  14806                             json.EndString();
  14807 
  14808                             json.WriteString("PreferredBlockSize");
  14809                             json.WriteNumber(blockVector.GetPreferredBlockSize());
  14810 
  14811                             json.WriteString("Blocks");
  14812                             blockVector.PrintDetailedMap(json);
  14813 
  14814                             json.WriteString("DedicatedAllocations");
  14815                             pool->m_DedicatedAllocations.BuildStatsString(json);
  14816                         }
  14817                         json.EndObject();
  14818                     }
  14819                 }
  14820 
  14821                 if (!displayType)
  14822                     json.EndArray();
  14823             }
  14824         }
  14825     }
  14826     json.EndObject();
  14827 }
  14828 #endif // VMA_STATS_STRING_ENABLED
  14829 #endif // _VMA_ALLOCATOR_T_FUNCTIONS
  14830 
  14831 
  14832 #ifndef _VMA_PUBLIC_INTERFACE
  14833 VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateAllocator(
  14834     const VmaAllocatorCreateInfo* pCreateInfo,
  14835     VmaAllocator* pAllocator)
  14836 {
  14837     VMA_ASSERT(pCreateInfo && pAllocator);
  14838     VMA_ASSERT(pCreateInfo->vulkanApiVersion == 0 ||
  14839         (VK_VERSION_MAJOR(pCreateInfo->vulkanApiVersion) == 1 && VK_VERSION_MINOR(pCreateInfo->vulkanApiVersion) <= 3));
  14840     VMA_DEBUG_LOG("vmaCreateAllocator");
  14841     *pAllocator = vma_new(pCreateInfo->pAllocationCallbacks, VmaAllocator_T)(pCreateInfo);
  14842     VkResult result = (*pAllocator)->Init(pCreateInfo);
  14843     if(result < 0)
  14844     {
  14845         vma_delete(pCreateInfo->pAllocationCallbacks, *pAllocator);
  14846         *pAllocator = VK_NULL_HANDLE;
  14847     }
  14848     return result;
  14849 }
  14850 
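/*
Example (illustrative sketch): typical allocator creation and destruction.
instance, physicalDevice, and device are assumed to have been created by the
application beforehand.

    VmaAllocatorCreateInfo allocatorCreateInfo = {};
    allocatorCreateInfo.vulkanApiVersion = VK_API_VERSION_1_2;
    allocatorCreateInfo.instance = instance;
    allocatorCreateInfo.physicalDevice = physicalDevice;
    allocatorCreateInfo.device = device;

    VmaAllocator allocator = VK_NULL_HANDLE;
    VkResult res = vmaCreateAllocator(&allocatorCreateInfo, &allocator);
    // ... use the allocator; on shutdown:
    vmaDestroyAllocator(allocator);
*/
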
  14851 VMA_CALL_PRE void VMA_CALL_POST vmaDestroyAllocator(
  14852     VmaAllocator allocator)
  14853 {
  14854     if(allocator != VK_NULL_HANDLE)
  14855     {
  14856         VMA_DEBUG_LOG("vmaDestroyAllocator");
  14857         VkAllocationCallbacks allocationCallbacks = allocator->m_AllocationCallbacks; // Have to copy the callbacks when destroying.
  14858         vma_delete(&allocationCallbacks, allocator);
  14859     }
  14860 }
  14861 
  14862 VMA_CALL_PRE void VMA_CALL_POST vmaGetAllocatorInfo(VmaAllocator allocator, VmaAllocatorInfo* pAllocatorInfo)
  14863 {
  14864     VMA_ASSERT(allocator && pAllocatorInfo);
  14865     pAllocatorInfo->instance = allocator->m_hInstance;
  14866     pAllocatorInfo->physicalDevice = allocator->GetPhysicalDevice();
  14867     pAllocatorInfo->device = allocator->m_hDevice;
  14868 }
  14869 
  14870 VMA_CALL_PRE void VMA_CALL_POST vmaGetPhysicalDeviceProperties(
  14871     VmaAllocator allocator,
  14872     const VkPhysicalDeviceProperties **ppPhysicalDeviceProperties)
  14873 {
  14874     VMA_ASSERT(allocator && ppPhysicalDeviceProperties);
  14875     *ppPhysicalDeviceProperties = &allocator->m_PhysicalDeviceProperties;
  14876 }
  14877 
  14878 VMA_CALL_PRE void VMA_CALL_POST vmaGetMemoryProperties(
  14879     VmaAllocator allocator,
  14880     const VkPhysicalDeviceMemoryProperties** ppPhysicalDeviceMemoryProperties)
  14881 {
  14882     VMA_ASSERT(allocator && ppPhysicalDeviceMemoryProperties);
  14883     *ppPhysicalDeviceMemoryProperties = &allocator->m_MemProps;
  14884 }
  14885 
  14886 VMA_CALL_PRE void VMA_CALL_POST vmaGetMemoryTypeProperties(
  14887     VmaAllocator allocator,
  14888     uint32_t memoryTypeIndex,
  14889     VkMemoryPropertyFlags* pFlags)
  14890 {
  14891     VMA_ASSERT(allocator && pFlags);
  14892     VMA_ASSERT(memoryTypeIndex < allocator->GetMemoryTypeCount());
  14893     *pFlags = allocator->m_MemProps.memoryTypes[memoryTypeIndex].propertyFlags;
  14894 }
  14895 
  14896 VMA_CALL_PRE void VMA_CALL_POST vmaSetCurrentFrameIndex(
  14897     VmaAllocator allocator,
  14898     uint32_t frameIndex)
  14899 {
  14900     VMA_ASSERT(allocator);
  14901 
  14902     VMA_DEBUG_GLOBAL_MUTEX_LOCK
  14903 
  14904     allocator->SetCurrentFrameIndex(frameIndex);
  14905 }
  14906 
  14907 VMA_CALL_PRE void VMA_CALL_POST vmaCalculateStatistics(
  14908     VmaAllocator allocator,
  14909     VmaTotalStatistics* pStats)
  14910 {
  14911     VMA_ASSERT(allocator && pStats);
  14912     VMA_DEBUG_GLOBAL_MUTEX_LOCK
  14913     allocator->CalculateStatistics(pStats);
  14914 }
  14915 
  14916 VMA_CALL_PRE void VMA_CALL_POST vmaGetHeapBudgets(
  14917     VmaAllocator allocator,
  14918     VmaBudget* pBudgets)
  14919 {
  14920     VMA_ASSERT(allocator && pBudgets);
  14921     VMA_DEBUG_GLOBAL_MUTEX_LOCK
  14922     allocator->GetHeapBudgets(pBudgets, 0, allocator->GetMemoryHeapCount());
  14923 }
  14924 
  14925 #if VMA_STATS_STRING_ENABLED
  14926 
  14927 VMA_CALL_PRE void VMA_CALL_POST vmaBuildStatsString(
  14928     VmaAllocator allocator,
  14929     char** ppStatsString,
  14930     VkBool32 detailedMap)
  14931 {
  14932     VMA_ASSERT(allocator && ppStatsString);
  14933     VMA_DEBUG_GLOBAL_MUTEX_LOCK
  14934 
  14935     VmaStringBuilder sb(allocator->GetAllocationCallbacks());
  14936     {
  14937         VmaBudget budgets[VK_MAX_MEMORY_HEAPS];
  14938         allocator->GetHeapBudgets(budgets, 0, allocator->GetMemoryHeapCount());
  14939 
  14940         VmaTotalStatistics stats;
  14941         allocator->CalculateStatistics(&stats);
  14942 
  14943         VmaJsonWriter json(allocator->GetAllocationCallbacks(), sb);
  14944         json.BeginObject();
  14945         {
  14946             json.WriteString("General");
  14947             json.BeginObject();
  14948             {
  14949                 const VkPhysicalDeviceProperties& deviceProperties = allocator->m_PhysicalDeviceProperties;
  14950                 const VkPhysicalDeviceMemoryProperties& memoryProperties = allocator->m_MemProps;
  14951 
  14952                 json.WriteString("API");
  14953                 json.WriteString("Vulkan");
  14954 
  14955                 json.WriteString("apiVersion");
  14956                 json.BeginString();
  14957                 json.ContinueString(VK_VERSION_MAJOR(deviceProperties.apiVersion));
  14958                 json.ContinueString(".");
  14959                 json.ContinueString(VK_VERSION_MINOR(deviceProperties.apiVersion));
  14960                 json.ContinueString(".");
  14961                 json.ContinueString(VK_VERSION_PATCH(deviceProperties.apiVersion));
  14962                 json.EndString();
  14963 
  14964                 json.WriteString("GPU");
  14965                 json.WriteString(deviceProperties.deviceName);
  14966                 json.WriteString("deviceType");
  14967                 json.WriteNumber(static_cast<uint32_t>(deviceProperties.deviceType));
  14968 
  14969                 json.WriteString("maxMemoryAllocationCount");
  14970                 json.WriteNumber(deviceProperties.limits.maxMemoryAllocationCount);
  14971                 json.WriteString("bufferImageGranularity");
  14972                 json.WriteNumber(deviceProperties.limits.bufferImageGranularity);
  14973                 json.WriteString("nonCoherentAtomSize");
  14974                 json.WriteNumber(deviceProperties.limits.nonCoherentAtomSize);
  14975 
  14976                 json.WriteString("memoryHeapCount");
  14977                 json.WriteNumber(memoryProperties.memoryHeapCount);
  14978                 json.WriteString("memoryTypeCount");
  14979                 json.WriteNumber(memoryProperties.memoryTypeCount);
  14980             }
  14981             json.EndObject();
  14982         }
  14983         {
  14984             json.WriteString("Total");
  14985             VmaPrintDetailedStatistics(json, stats.total);
  14986         }
  14987         {
  14988             json.WriteString("MemoryInfo");
  14989             json.BeginObject();
  14990             {
  14991                 for (uint32_t heapIndex = 0; heapIndex < allocator->GetMemoryHeapCount(); ++heapIndex)
  14992                 {
  14993                     json.BeginString("Heap ");
  14994                     json.ContinueString(heapIndex);
  14995                     json.EndString();
  14996                     json.BeginObject();
  14997                     {
  14998                         const VkMemoryHeap& heapInfo = allocator->m_MemProps.memoryHeaps[heapIndex];
  14999                         json.WriteString("Flags");
  15000                         json.BeginArray(true);
  15001                         {
  15002                             if (heapInfo.flags & VK_MEMORY_HEAP_DEVICE_LOCAL_BIT)
  15003                                 json.WriteString("DEVICE_LOCAL");
  15004                         #if VMA_VULKAN_VERSION >= 1001000
  15005                             if (heapInfo.flags & VK_MEMORY_HEAP_MULTI_INSTANCE_BIT)
  15006                                 json.WriteString("MULTI_INSTANCE");
  15007                         #endif
  15008 
  15009                             VkMemoryHeapFlags flags = heapInfo.flags &
  15010                                 ~(VK_MEMORY_HEAP_DEVICE_LOCAL_BIT
  15011                         #if VMA_VULKAN_VERSION >= 1001000
  15012                                     | VK_MEMORY_HEAP_MULTI_INSTANCE_BIT
  15013                         #endif
  15014                                     );
  15015                             if (flags != 0)
  15016                                 json.WriteNumber(flags);
  15017                         }
  15018                         json.EndArray();
  15019 
  15020                         json.WriteString("Size");
  15021                         json.WriteNumber(heapInfo.size);
  15022 
  15023                         json.WriteString("Budget");
  15024                         json.BeginObject();
  15025                         {
  15026                             json.WriteString("BudgetBytes");
  15027                             json.WriteNumber(budgets[heapIndex].budget);
  15028                             json.WriteString("UsageBytes");
  15029                             json.WriteNumber(budgets[heapIndex].usage);
  15030                         }
  15031                         json.EndObject();
  15032 
  15033                         json.WriteString("Stats");
  15034                         VmaPrintDetailedStatistics(json, stats.memoryHeap[heapIndex]);
  15035 
  15036                         json.WriteString("MemoryPools");
  15037                         json.BeginObject();
  15038                         {
  15039                             for (uint32_t typeIndex = 0; typeIndex < allocator->GetMemoryTypeCount(); ++typeIndex)
  15040                             {
  15041                                 if (allocator->MemoryTypeIndexToHeapIndex(typeIndex) == heapIndex)
  15042                                 {
  15043                                     json.BeginString("Type ");
  15044                                     json.ContinueString(typeIndex);
  15045                                     json.EndString();
  15046                                     json.BeginObject();
  15047                                     {
  15048                                         json.WriteString("Flags");
  15049                                         json.BeginArray(true);
  15050                                         {
  15051                                             VkMemoryPropertyFlags flags = allocator->m_MemProps.memoryTypes[typeIndex].propertyFlags;
  15052                                             if (flags & VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT)
  15053                                                 json.WriteString("DEVICE_LOCAL");
  15054                                             if (flags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT)
  15055                                                 json.WriteString("HOST_VISIBLE");
  15056                                             if (flags & VK_MEMORY_PROPERTY_HOST_COHERENT_BIT)
  15057                                                 json.WriteString("HOST_COHERENT");
  15058                                             if (flags & VK_MEMORY_PROPERTY_HOST_CACHED_BIT)
  15059                                                 json.WriteString("HOST_CACHED");
  15060                                             if (flags & VK_MEMORY_PROPERTY_LAZILY_ALLOCATED_BIT)
  15061                                                 json.WriteString("LAZILY_ALLOCATED");
  15062                                         #if VMA_VULKAN_VERSION >= 1001000
  15063                                             if (flags & VK_MEMORY_PROPERTY_PROTECTED_BIT)
  15064                                                 json.WriteString("PROTECTED");
  15065                                         #endif
  15066                                         #if VK_AMD_device_coherent_memory
  15067                                             if (flags & VK_MEMORY_PROPERTY_DEVICE_COHERENT_BIT_AMD_COPY)
  15068                                                 json.WriteString("DEVICE_COHERENT_AMD");
  15069                                             if (flags & VK_MEMORY_PROPERTY_DEVICE_UNCACHED_BIT_AMD_COPY)
  15070                                                 json.WriteString("DEVICE_UNCACHED_AMD");
  15071                                         #endif
  15072 
  15073                                             flags &= ~(VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT
  15074                                         #if VMA_VULKAN_VERSION >= 1001000
  15075                                                 | VK_MEMORY_PROPERTY_LAZILY_ALLOCATED_BIT
  15076                                         #endif
  15077                                         #if VK_AMD_device_coherent_memory
  15078                                                 | VK_MEMORY_PROPERTY_DEVICE_COHERENT_BIT_AMD_COPY
  15079                                                 | VK_MEMORY_PROPERTY_DEVICE_UNCACHED_BIT_AMD_COPY
  15080                                         #endif
  15081                                                 | VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT
  15082                                                 | VK_MEMORY_PROPERTY_HOST_COHERENT_BIT
  15083                                                 | VK_MEMORY_PROPERTY_HOST_CACHED_BIT);
  15084                                             if (flags != 0)
  15085                                                 json.WriteNumber(flags);
  15086                                         }
  15087                                         json.EndArray();
  15088 
  15089                                         json.WriteString("Stats");
  15090                                         VmaPrintDetailedStatistics(json, stats.memoryType[typeIndex]);
  15091                                     }
  15092                                     json.EndObject();
  15093                                 }
  15094                             }
  15095 
  15096                         }
  15097                         json.EndObject();
  15098                     }
  15099                     json.EndObject();
  15100                 }
  15101             }
  15102             json.EndObject();
  15103         }
  15104 
  15105         if (detailedMap == VK_TRUE)
  15106             allocator->PrintDetailedMap(json);
  15107 
  15108         json.EndObject();
  15109     }
  15110 
  15111     *ppStatsString = VmaCreateStringCopy(allocator->GetAllocationCallbacks(), sb.GetData(), sb.GetLength());
  15112 }
  15113 
  15114 VMA_CALL_PRE void VMA_CALL_POST vmaFreeStatsString(
  15115     VmaAllocator allocator,
  15116     char* pStatsString)
  15117 {
  15118     if(pStatsString != VMA_NULL)
  15119     {
  15120         VMA_ASSERT(allocator);
  15121         VmaFreeString(allocator->GetAllocationCallbacks(), pStatsString);
  15122     }
  15123 }
  15124 
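/*
Example (illustrative sketch): dumping the JSON document built above. Every
string returned by vmaBuildStatsString() must be released with
vmaFreeStatsString().

    char* statsString = nullptr;
    vmaBuildStatsString(allocator, &statsString, VK_TRUE); // VK_TRUE includes the detailed map.
    printf("%s\n", statsString);
    vmaFreeStatsString(allocator, statsString);
*/
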
  15125 #endif // VMA_STATS_STRING_ENABLED
  15126 
  15127 /*
  15128 This function is not protected by any mutex because it just reads immutable data.
  15129 */
  15130 VMA_CALL_PRE VkResult VMA_CALL_POST vmaFindMemoryTypeIndex(
  15131     VmaAllocator allocator,
  15132     uint32_t memoryTypeBits,
  15133     const VmaAllocationCreateInfo* pAllocationCreateInfo,
  15134     uint32_t* pMemoryTypeIndex)
  15135 {
  15136     VMA_ASSERT(allocator != VK_NULL_HANDLE);
  15137     VMA_ASSERT(pAllocationCreateInfo != VMA_NULL);
  15138     VMA_ASSERT(pMemoryTypeIndex != VMA_NULL);
  15139 
  15140     return allocator->FindMemoryTypeIndex(memoryTypeBits, pAllocationCreateInfo, VmaBufferImageUsage::UNKNOWN, pMemoryTypeIndex);
  15141 }
  15142 
  15143 VMA_CALL_PRE VkResult VMA_CALL_POST vmaFindMemoryTypeIndexForBufferInfo(
  15144     VmaAllocator allocator,
  15145     const VkBufferCreateInfo* pBufferCreateInfo,
  15146     const VmaAllocationCreateInfo* pAllocationCreateInfo,
  15147     uint32_t* pMemoryTypeIndex)
  15148 {
  15149     VMA_ASSERT(allocator != VK_NULL_HANDLE);
  15150     VMA_ASSERT(pBufferCreateInfo != VMA_NULL);
  15151     VMA_ASSERT(pAllocationCreateInfo != VMA_NULL);
  15152     VMA_ASSERT(pMemoryTypeIndex != VMA_NULL);
  15153 
  15154     const VkDevice hDev = allocator->m_hDevice;
  15155     const VmaVulkanFunctions* funcs = &allocator->GetVulkanFunctions();
  15156     VkResult res;
  15157 
  15158 #if VMA_KHR_MAINTENANCE4 || VMA_VULKAN_VERSION >= 1003000
  15159     if(funcs->vkGetDeviceBufferMemoryRequirements)
  15160     {
  15161         // Can query straight from VkBufferCreateInfo :)
  15162         VkDeviceBufferMemoryRequirementsKHR devBufMemReq = {VK_STRUCTURE_TYPE_DEVICE_BUFFER_MEMORY_REQUIREMENTS_KHR};
  15163         devBufMemReq.pCreateInfo = pBufferCreateInfo;
  15164 
  15165         VkMemoryRequirements2 memReq = {VK_STRUCTURE_TYPE_MEMORY_REQUIREMENTS_2};
  15166         (*funcs->vkGetDeviceBufferMemoryRequirements)(hDev, &devBufMemReq, &memReq);
  15167 
  15168         res = allocator->FindMemoryTypeIndex(
  15169             memReq.memoryRequirements.memoryTypeBits, pAllocationCreateInfo,
  15170             VmaBufferImageUsage(*pBufferCreateInfo, allocator->m_UseKhrMaintenance5), pMemoryTypeIndex);
  15171     }
  15172     else
  15173 #endif // VMA_KHR_MAINTENANCE4 || VMA_VULKAN_VERSION >= 1003000
  15174     {
  15175         // Must create a dummy buffer to query :(
  15176         VkBuffer hBuffer = VK_NULL_HANDLE;
  15177         res = funcs->vkCreateBuffer(
  15178             hDev, pBufferCreateInfo, allocator->GetAllocationCallbacks(), &hBuffer);
  15179         if(res == VK_SUCCESS)
  15180         {
  15181             VkMemoryRequirements memReq = {};
  15182             funcs->vkGetBufferMemoryRequirements(hDev, hBuffer, &memReq);
  15183 
  15184             res = allocator->FindMemoryTypeIndex(
  15185                 memReq.memoryTypeBits, pAllocationCreateInfo,
  15186                 VmaBufferImageUsage(*pBufferCreateInfo, allocator->m_UseKhrMaintenance5), pMemoryTypeIndex);
  15187 
  15188             funcs->vkDestroyBuffer(
  15189                 hDev, hBuffer, allocator->GetAllocationCallbacks());
  15190         }
  15191     }
  15192     return res;
  15193 }
  15194 
  15195 VMA_CALL_PRE VkResult VMA_CALL_POST vmaFindMemoryTypeIndexForImageInfo(
  15196     VmaAllocator allocator,
  15197     const VkImageCreateInfo* pImageCreateInfo,
  15198     const VmaAllocationCreateInfo* pAllocationCreateInfo,
  15199     uint32_t* pMemoryTypeIndex)
  15200 {
  15201     VMA_ASSERT(allocator != VK_NULL_HANDLE);
  15202     VMA_ASSERT(pImageCreateInfo != VMA_NULL);
  15203     VMA_ASSERT(pAllocationCreateInfo != VMA_NULL);
  15204     VMA_ASSERT(pMemoryTypeIndex != VMA_NULL);
  15205 
  15206     const VkDevice hDev = allocator->m_hDevice;
  15207     const VmaVulkanFunctions* funcs = &allocator->GetVulkanFunctions();
  15208     VkResult res;
  15209 
  15210 #if VMA_KHR_MAINTENANCE4 || VMA_VULKAN_VERSION >= 1003000
  15211     if(funcs->vkGetDeviceImageMemoryRequirements)
  15212     {
  15213         // Can query straight from VkImageCreateInfo :)
  15214         VkDeviceImageMemoryRequirementsKHR devImgMemReq = {VK_STRUCTURE_TYPE_DEVICE_IMAGE_MEMORY_REQUIREMENTS_KHR};
  15215         devImgMemReq.pCreateInfo = pImageCreateInfo;
  15216         VMA_ASSERT(pImageCreateInfo->tiling != VK_IMAGE_TILING_DRM_FORMAT_MODIFIER_EXT_COPY && (pImageCreateInfo->flags & VK_IMAGE_CREATE_DISJOINT_BIT_COPY) == 0 &&
  15217             "Cannot use this VkImageCreateInfo with vmaFindMemoryTypeIndexForImageInfo as I don't know what to pass as VkDeviceImageMemoryRequirements::planeAspect.");
  15218 
  15219         VkMemoryRequirements2 memReq = {VK_STRUCTURE_TYPE_MEMORY_REQUIREMENTS_2};
  15220         (*funcs->vkGetDeviceImageMemoryRequirements)(hDev, &devImgMemReq, &memReq);
  15221 
  15222         res = allocator->FindMemoryTypeIndex(
  15223             memReq.memoryRequirements.memoryTypeBits, pAllocationCreateInfo,
  15224             VmaBufferImageUsage(*pImageCreateInfo), pMemoryTypeIndex);
  15225     }
  15226     else
  15227 #endif // VMA_KHR_MAINTENANCE4 || VMA_VULKAN_VERSION >= 1003000
  15228     {
  15229         // Must create a dummy image to query :(
  15230         VkImage hImage = VK_NULL_HANDLE;
  15231         res = funcs->vkCreateImage(
  15232             hDev, pImageCreateInfo, allocator->GetAllocationCallbacks(), &hImage);
  15233         if(res == VK_SUCCESS)
  15234         {
  15235             VkMemoryRequirements memReq = {};
  15236             funcs->vkGetImageMemoryRequirements(hDev, hImage, &memReq);
  15237 
  15238             res = allocator->FindMemoryTypeIndex(
  15239                 memReq.memoryTypeBits, pAllocationCreateInfo,
  15240                 VmaBufferImageUsage(*pImageCreateInfo), pMemoryTypeIndex);
  15241 
  15242             funcs->vkDestroyImage(
  15243                 hDev, hImage, allocator->GetAllocationCallbacks());
  15244         }
  15245     }
  15246     return res;
  15247 }
  15248 
  15249 VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreatePool(
  15250     VmaAllocator allocator,
  15251     const VmaPoolCreateInfo* pCreateInfo,
  15252     VmaPool* pPool)
  15253 {
  15254     VMA_ASSERT(allocator && pCreateInfo && pPool);
  15255 
  15256     VMA_DEBUG_LOG("vmaCreatePool");
  15257 
  15258     VMA_DEBUG_GLOBAL_MUTEX_LOCK
  15259 
  15260     return allocator->CreatePool(pCreateInfo, pPool);
  15261 }
  15262 
  15263 VMA_CALL_PRE void VMA_CALL_POST vmaDestroyPool(
  15264     VmaAllocator allocator,
  15265     VmaPool pool)
  15266 {
  15267     VMA_ASSERT(allocator);
  15268 
  15269     if(pool == VK_NULL_HANDLE)
  15270     {
  15271         return;
  15272     }
  15273 
  15274     VMA_DEBUG_LOG("vmaDestroyPool");
  15275 
  15276     VMA_DEBUG_GLOBAL_MUTEX_LOCK
  15277 
  15278     allocator->DestroyPool(pool);
  15279 }
  15280 
  15281 VMA_CALL_PRE void VMA_CALL_POST vmaGetPoolStatistics(
  15282     VmaAllocator allocator,
  15283     VmaPool pool,
  15284     VmaStatistics* pPoolStats)
  15285 {
  15286     VMA_ASSERT(allocator && pool && pPoolStats);
  15287 
  15288     VMA_DEBUG_GLOBAL_MUTEX_LOCK
  15289 
  15290     allocator->GetPoolStatistics(pool, pPoolStats);
  15291 }
  15292 
  15293 VMA_CALL_PRE void VMA_CALL_POST vmaCalculatePoolStatistics(
  15294     VmaAllocator allocator,
  15295     VmaPool pool,
  15296     VmaDetailedStatistics* pPoolStats)
  15297 {
  15298     VMA_ASSERT(allocator && pool && pPoolStats);
  15299 
  15300     VMA_DEBUG_GLOBAL_MUTEX_LOCK
  15301 
  15302     allocator->CalculatePoolStatistics(pool, pPoolStats);
  15303 }
  15304 
  15305 VMA_CALL_PRE VkResult VMA_CALL_POST vmaCheckPoolCorruption(VmaAllocator allocator, VmaPool pool)
  15306 {
  15307     VMA_ASSERT(allocator && pool);
  15308 
  15309     VMA_DEBUG_LOG("vmaCheckPoolCorruption");
  15310 
  15311     VMA_DEBUG_GLOBAL_MUTEX_LOCK
  15312 
  15313     return allocator->CheckPoolCorruption(pool);
  15314 }
  15315 
  15316 VMA_CALL_PRE void VMA_CALL_POST vmaGetPoolName(
  15317     VmaAllocator allocator,
  15318     VmaPool pool,
  15319     const char** ppName)
  15320 {
  15321     VMA_ASSERT(allocator && pool && ppName);
  15322 
  15323     VMA_DEBUG_LOG("vmaGetPoolName");
  15324 
  15325     VMA_DEBUG_GLOBAL_MUTEX_LOCK
  15326 
  15327     *ppName = pool->GetName();
  15328 }
  15329 
  15330 VMA_CALL_PRE void VMA_CALL_POST vmaSetPoolName(
  15331     VmaAllocator allocator,
  15332     VmaPool pool,
  15333     const char* pName)
  15334 {
  15335     VMA_ASSERT(allocator && pool);
  15336 
  15337     VMA_DEBUG_LOG("vmaSetPoolName");
  15338 
  15339     VMA_DEBUG_GLOBAL_MUTEX_LOCK
  15340 
  15341     pool->SetName(pName);
  15342 }
  15343 
  15344 VMA_CALL_PRE VkResult VMA_CALL_POST vmaAllocateMemory(
  15345     VmaAllocator allocator,
  15346     const VkMemoryRequirements* pVkMemoryRequirements,
  15347     const VmaAllocationCreateInfo* pCreateInfo,
  15348     VmaAllocation* pAllocation,
  15349     VmaAllocationInfo* pAllocationInfo)
  15350 {
  15351     VMA_ASSERT(allocator && pVkMemoryRequirements && pCreateInfo && pAllocation);
  15352 
  15353     VMA_DEBUG_LOG("vmaAllocateMemory");
  15354 
  15355     VMA_DEBUG_GLOBAL_MUTEX_LOCK
  15356 
  15357     VkResult result = allocator->AllocateMemory(
  15358         *pVkMemoryRequirements,
  15359         false, // requiresDedicatedAllocation
  15360         false, // prefersDedicatedAllocation
  15361         VK_NULL_HANDLE, // dedicatedBuffer
  15362         VK_NULL_HANDLE, // dedicatedImage
  15363         VmaBufferImageUsage::UNKNOWN, // dedicatedBufferImageUsage
  15364         *pCreateInfo,
  15365         VMA_SUBALLOCATION_TYPE_UNKNOWN,
  15366         1, // allocationCount
  15367         pAllocation);
  15368 
  15369     if(pAllocationInfo != VMA_NULL && result == VK_SUCCESS)
  15370     {
  15371         allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
  15372     }
  15373 
  15374     return result;
  15375 }
  15376 
  15377 VMA_CALL_PRE VkResult VMA_CALL_POST vmaAllocateMemoryPages(
  15378     VmaAllocator allocator,
  15379     const VkMemoryRequirements* pVkMemoryRequirements,
  15380     const VmaAllocationCreateInfo* pCreateInfo,
  15381     size_t allocationCount,
  15382     VmaAllocation* pAllocations,
  15383     VmaAllocationInfo* pAllocationInfo)
  15384 {
  15385     if(allocationCount == 0)
  15386     {
  15387         return VK_SUCCESS;
  15388     }
  15389 
  15390     VMA_ASSERT(allocator && pVkMemoryRequirements && pCreateInfo && pAllocations);
  15391 
  15392     VMA_DEBUG_LOG("vmaAllocateMemoryPages");
  15393 
  15394     VMA_DEBUG_GLOBAL_MUTEX_LOCK
  15395 
  15396     VkResult result = allocator->AllocateMemory(
  15397         *pVkMemoryRequirements,
  15398         false, // requiresDedicatedAllocation
  15399         false, // prefersDedicatedAllocation
  15400         VK_NULL_HANDLE, // dedicatedBuffer
  15401         VK_NULL_HANDLE, // dedicatedImage
  15402         VmaBufferImageUsage::UNKNOWN, // dedicatedBufferImageUsage
  15403         *pCreateInfo,
  15404         VMA_SUBALLOCATION_TYPE_UNKNOWN,
  15405         allocationCount,
  15406         pAllocations);
  15407 
  15408     if(pAllocationInfo != VMA_NULL && result == VK_SUCCESS)
  15409     {
  15410         for(size_t i = 0; i < allocationCount; ++i)
  15411         {
  15412             allocator->GetAllocationInfo(pAllocations[i], pAllocationInfo + i);
  15413         }
  15414     }
  15415 
  15416     return result;
  15417 }
  15418 
  15419 VMA_CALL_PRE VkResult VMA_CALL_POST vmaAllocateMemoryForBuffer(
  15420     VmaAllocator allocator,
  15421     VkBuffer buffer,
  15422     const VmaAllocationCreateInfo* pCreateInfo,
  15423     VmaAllocation* pAllocation,
  15424     VmaAllocationInfo* pAllocationInfo)
  15425 {
  15426     VMA_ASSERT(allocator && buffer != VK_NULL_HANDLE && pCreateInfo && pAllocation);
  15427 
  15428     VMA_DEBUG_LOG("vmaAllocateMemoryForBuffer");
  15429 
  15430     VMA_DEBUG_GLOBAL_MUTEX_LOCK
  15431 
  15432     VkMemoryRequirements vkMemReq = {};
  15433     bool requiresDedicatedAllocation = false;
  15434     bool prefersDedicatedAllocation = false;
  15435     allocator->GetBufferMemoryRequirements(buffer, vkMemReq,
  15436         requiresDedicatedAllocation,
  15437         prefersDedicatedAllocation);
  15438 
  15439     VkResult result = allocator->AllocateMemory(
  15440         vkMemReq,
  15441         requiresDedicatedAllocation,
  15442         prefersDedicatedAllocation,
  15443         buffer, // dedicatedBuffer
  15444         VK_NULL_HANDLE, // dedicatedImage
  15445         VmaBufferImageUsage::UNKNOWN, // dedicatedBufferImageUsage
  15446         *pCreateInfo,
  15447         VMA_SUBALLOCATION_TYPE_BUFFER,
  15448         1, // allocationCount
  15449         pAllocation);
  15450 
  15451     if(pAllocationInfo && result == VK_SUCCESS)
  15452     {
  15453         allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
  15454     }
  15455 
  15456     return result;
  15457 }
  15458 
  15459 VMA_CALL_PRE VkResult VMA_CALL_POST vmaAllocateMemoryForImage(
  15460     VmaAllocator allocator,
  15461     VkImage image,
  15462     const VmaAllocationCreateInfo* pCreateInfo,
  15463     VmaAllocation* pAllocation,
  15464     VmaAllocationInfo* pAllocationInfo)
  15465 {
  15466     VMA_ASSERT(allocator && image != VK_NULL_HANDLE && pCreateInfo && pAllocation);
  15467 
  15468     VMA_DEBUG_LOG("vmaAllocateMemoryForImage");
  15469 
  15470     VMA_DEBUG_GLOBAL_MUTEX_LOCK
  15471 
  15472     VkMemoryRequirements vkMemReq = {};
  15473     bool requiresDedicatedAllocation = false;
  15474     bool prefersDedicatedAllocation  = false;
  15475     allocator->GetImageMemoryRequirements(image, vkMemReq,
  15476         requiresDedicatedAllocation, prefersDedicatedAllocation);
  15477 
  15478     VkResult result = allocator->AllocateMemory(
  15479         vkMemReq,
  15480         requiresDedicatedAllocation,
  15481         prefersDedicatedAllocation,
  15482         VK_NULL_HANDLE, // dedicatedBuffer
  15483         image, // dedicatedImage
  15484         VmaBufferImageUsage::UNKNOWN, // dedicatedBufferImageUsage
  15485         *pCreateInfo,
  15486         VMA_SUBALLOCATION_TYPE_IMAGE_UNKNOWN,
  15487         1, // allocationCount
  15488         pAllocation);
  15489 
  15490     if(pAllocationInfo && result == VK_SUCCESS)
  15491     {
  15492         allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
  15493     }
  15494 
  15495     return result;
  15496 }
  15497 
  15498 VMA_CALL_PRE void VMA_CALL_POST vmaFreeMemory(
  15499     VmaAllocator allocator,
  15500     VmaAllocation allocation)
  15501 {
  15502     VMA_ASSERT(allocator);
  15503 
  15504     if(allocation == VK_NULL_HANDLE)
  15505     {
  15506         return;
  15507     }
  15508 
  15509     VMA_DEBUG_LOG("vmaFreeMemory");
  15510 
  15511     VMA_DEBUG_GLOBAL_MUTEX_LOCK
  15512 
  15513     allocator->FreeMemory(
  15514         1, // allocationCount
  15515         &allocation);
  15516 }
  15517 
  15518 VMA_CALL_PRE void VMA_CALL_POST vmaFreeMemoryPages(
  15519     VmaAllocator allocator,
  15520     size_t allocationCount,
  15521     const VmaAllocation* pAllocations)
  15522 {
  15523     if(allocationCount == 0)
  15524     {
  15525         return;
  15526     }
  15527 
  15528     VMA_ASSERT(allocator);
  15529 
  15530     VMA_DEBUG_LOG("vmaFreeMemoryPages");
  15531 
  15532     VMA_DEBUG_GLOBAL_MUTEX_LOCK
  15533 
  15534     allocator->FreeMemory(allocationCount, pAllocations);
  15535 }
  15536 
  15537 VMA_CALL_PRE void VMA_CALL_POST vmaGetAllocationInfo(
  15538     VmaAllocator allocator,
  15539     VmaAllocation allocation,
  15540     VmaAllocationInfo* pAllocationInfo)
  15541 {
  15542     VMA_ASSERT(allocator && allocation && pAllocationInfo);
  15543 
  15544     VMA_DEBUG_GLOBAL_MUTEX_LOCK
  15545 
  15546     allocator->GetAllocationInfo(allocation, pAllocationInfo);
  15547 }
  15548 
  15549 VMA_CALL_PRE void VMA_CALL_POST vmaGetAllocationInfo2(
  15550     VmaAllocator allocator,
  15551     VmaAllocation allocation,
  15552     VmaAllocationInfo2* pAllocationInfo)
  15553 {
  15554     VMA_ASSERT(allocator && allocation && pAllocationInfo);
  15555 
  15556     VMA_DEBUG_GLOBAL_MUTEX_LOCK
  15557 
  15558     allocator->GetAllocationInfo2(allocation, pAllocationInfo);
  15559 }
  15560 
  15561 VMA_CALL_PRE void VMA_CALL_POST vmaSetAllocationUserData(
  15562     VmaAllocator allocator,
  15563     VmaAllocation allocation,
  15564     void* pUserData)
  15565 {
  15566     VMA_ASSERT(allocator && allocation);
  15567 
  15568     VMA_DEBUG_GLOBAL_MUTEX_LOCK
  15569 
  15570     allocation->SetUserData(allocator, pUserData);
  15571 }
  15572 
  15573 VMA_CALL_PRE void VMA_CALL_POST vmaSetAllocationName(
  15574     VmaAllocator VMA_NOT_NULL allocator,
  15575     VmaAllocation VMA_NOT_NULL allocation,
  15576     const char* VMA_NULLABLE pName)
  15577 {
  15578     allocation->SetName(allocator, pName);
  15579 }
  15580 
  15581 VMA_CALL_PRE void VMA_CALL_POST vmaGetAllocationMemoryProperties(
  15582     VmaAllocator VMA_NOT_NULL allocator,
  15583     VmaAllocation VMA_NOT_NULL allocation,
  15584     VkMemoryPropertyFlags* VMA_NOT_NULL pFlags)
  15585 {
  15586     VMA_ASSERT(allocator && allocation && pFlags);
  15587     const uint32_t memTypeIndex = allocation->GetMemoryTypeIndex();
  15588     *pFlags = allocator->m_MemProps.memoryTypes[memTypeIndex].propertyFlags;
  15589 }
  15590 
  15591 VMA_CALL_PRE VkResult VMA_CALL_POST vmaMapMemory(
  15592     VmaAllocator allocator,
  15593     VmaAllocation allocation,
  15594     void** ppData)
  15595 {
  15596     VMA_ASSERT(allocator && allocation && ppData);
  15597 
  15598     VMA_DEBUG_GLOBAL_MUTEX_LOCK
  15599 
  15600     return allocator->Map(allocation, ppData);
  15601 }
  15602 
  15603 VMA_CALL_PRE void VMA_CALL_POST vmaUnmapMemory(
  15604     VmaAllocator allocator,
  15605     VmaAllocation allocation)
  15606 {
  15607     VMA_ASSERT(allocator && allocation);
  15608 
  15609     VMA_DEBUG_GLOBAL_MUTEX_LOCK
  15610 
  15611     allocator->Unmap(allocation);
  15612 }
  15613 
  15614 VMA_CALL_PRE VkResult VMA_CALL_POST vmaFlushAllocation(
  15615     VmaAllocator allocator,
  15616     VmaAllocation allocation,
  15617     VkDeviceSize offset,
  15618     VkDeviceSize size)
  15619 {
  15620     VMA_ASSERT(allocator && allocation);
  15621 
  15622     VMA_DEBUG_LOG("vmaFlushAllocation");
  15623 
  15624     VMA_DEBUG_GLOBAL_MUTEX_LOCK
  15625 
  15626     return allocator->FlushOrInvalidateAllocation(allocation, offset, size, VMA_CACHE_FLUSH);
  15627 }
  15628 
  15629 VMA_CALL_PRE VkResult VMA_CALL_POST vmaInvalidateAllocation(
  15630     VmaAllocator allocator,
  15631     VmaAllocation allocation,
  15632     VkDeviceSize offset,
  15633     VkDeviceSize size)
  15634 {
  15635     VMA_ASSERT(allocator && allocation);
  15636 
  15637     VMA_DEBUG_LOG("vmaInvalidateAllocation");
  15638 
  15639     VMA_DEBUG_GLOBAL_MUTEX_LOCK
  15640 
  15641     return allocator->FlushOrInvalidateAllocation(allocation, offset, size, VMA_CACHE_INVALIDATE);
  15642 }
  15643 
  15644 VMA_CALL_PRE VkResult VMA_CALL_POST vmaFlushAllocations(
  15645     VmaAllocator allocator,
  15646     uint32_t allocationCount,
  15647     const VmaAllocation* allocations,
  15648     const VkDeviceSize* offsets,
  15649     const VkDeviceSize* sizes)
  15650 {
  15651     VMA_ASSERT(allocator);
  15652 
  15653     if(allocationCount == 0)
  15654     {
  15655         return VK_SUCCESS;
  15656     }
  15657 
  15658     VMA_ASSERT(allocations);
  15659 
  15660     VMA_DEBUG_LOG("vmaFlushAllocations");
  15661 
  15662     VMA_DEBUG_GLOBAL_MUTEX_LOCK
  15663 
  15664     return allocator->FlushOrInvalidateAllocations(allocationCount, allocations, offsets, sizes, VMA_CACHE_FLUSH);
  15665 }
  15666 
  15667 VMA_CALL_PRE VkResult VMA_CALL_POST vmaInvalidateAllocations(
  15668     VmaAllocator allocator,
  15669     uint32_t allocationCount,
  15670     const VmaAllocation* allocations,
  15671     const VkDeviceSize* offsets,
  15672     const VkDeviceSize* sizes)
  15673 {
  15674     VMA_ASSERT(allocator);
  15675 
  15676     if(allocationCount == 0)
  15677     {
  15678         return VK_SUCCESS;
  15679     }
  15680 
  15681     VMA_ASSERT(allocations);
  15682 
  15683     VMA_DEBUG_LOG("vmaInvalidateAllocations");
  15684 
  15685     VMA_DEBUG_GLOBAL_MUTEX_LOCK
  15686 
  15687     return allocator->FlushOrInvalidateAllocations(allocationCount, allocations, offsets, sizes, VMA_CACHE_INVALIDATE);
  15688 }
  15689 
  15690 VMA_CALL_PRE VkResult VMA_CALL_POST vmaCopyMemoryToAllocation(
  15691     VmaAllocator allocator,
  15692     const void* pSrcHostPointer,
  15693     VmaAllocation dstAllocation,
  15694     VkDeviceSize dstAllocationLocalOffset,
  15695     VkDeviceSize size)
  15696 {
  15697     VMA_ASSERT(allocator && pSrcHostPointer && dstAllocation);
  15698 
  15699     if(size == 0)
  15700     {
  15701         return VK_SUCCESS;
  15702     }
  15703 
  15704     VMA_DEBUG_LOG("vmaCopyMemoryToAllocation");
  15705 
  15706     VMA_DEBUG_GLOBAL_MUTEX_LOCK
  15707 
  15708     return allocator->CopyMemoryToAllocation(pSrcHostPointer, dstAllocation, dstAllocationLocalOffset, size);
  15709 }
  15710 
  15711 VMA_CALL_PRE VkResult VMA_CALL_POST vmaCopyAllocationToMemory(
  15712     VmaAllocator allocator,
  15713     VmaAllocation srcAllocation,
  15714     VkDeviceSize srcAllocationLocalOffset,
  15715     void* pDstHostPointer,
  15716     VkDeviceSize size)
  15717 {
  15718     VMA_ASSERT(allocator && srcAllocation && pDstHostPointer);
  15719 
  15720     if(size == 0)
  15721     {
  15722         return VK_SUCCESS;
  15723     }
  15724 
  15725     VMA_DEBUG_LOG("vmaCopyAllocationToMemory");
  15726 
  15727     VMA_DEBUG_GLOBAL_MUTEX_LOCK
  15728 
  15729     return allocator->CopyAllocationToMemory(srcAllocation, srcAllocationLocalOffset, pDstHostPointer, size);
  15730 }
  15731 
  15732 VMA_CALL_PRE VkResult VMA_CALL_POST vmaCheckCorruption(
  15733     VmaAllocator allocator,
  15734     uint32_t memoryTypeBits)
  15735 {
  15736     VMA_ASSERT(allocator);
  15737 
  15738     VMA_DEBUG_LOG("vmaCheckCorruption");
  15739 
  15740     VMA_DEBUG_GLOBAL_MUTEX_LOCK
  15741 
  15742     return allocator->CheckCorruption(memoryTypeBits);
  15743 }
  15744 
  15745 VMA_CALL_PRE VkResult VMA_CALL_POST vmaBeginDefragmentation(
  15746     VmaAllocator allocator,
  15747     const VmaDefragmentationInfo* pInfo,
  15748     VmaDefragmentationContext* pContext)
  15749 {
  15750     VMA_ASSERT(allocator && pInfo && pContext);
  15751 
  15752     VMA_DEBUG_LOG("vmaBeginDefragmentation");
  15753 
  15754     if (pInfo->pool != VMA_NULL)
  15755     {
  15756         // Check if run on supported algorithms
  15757         if (pInfo->pool->m_BlockVector.GetAlgorithm() & VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT)
  15758             return VK_ERROR_FEATURE_NOT_PRESENT;
  15759     }
  15760 
  15761     VMA_DEBUG_GLOBAL_MUTEX_LOCK
  15762 
  15763     *pContext = vma_new(allocator, VmaDefragmentationContext_T)(allocator, *pInfo);
  15764     return VK_SUCCESS;
  15765 }
  15766 
  15767 VMA_CALL_PRE void VMA_CALL_POST vmaEndDefragmentation(
  15768     VmaAllocator allocator,
  15769     VmaDefragmentationContext context,
  15770     VmaDefragmentationStats* pStats)
  15771 {
  15772     VMA_ASSERT(allocator && context);
  15773 
  15774     VMA_DEBUG_LOG("vmaEndDefragmentation");
  15775 
  15776     VMA_DEBUG_GLOBAL_MUTEX_LOCK
  15777 
  15778     if (pStats)
  15779         context->GetStats(*pStats);
  15780     vma_delete(allocator, context);
  15781 }
  15782 
  15783 VMA_CALL_PRE VkResult VMA_CALL_POST vmaBeginDefragmentationPass(
  15784     VmaAllocator VMA_NOT_NULL allocator,
  15785     VmaDefragmentationContext VMA_NOT_NULL context,
  15786     VmaDefragmentationPassMoveInfo* VMA_NOT_NULL pPassInfo)
  15787 {
  15788     VMA_ASSERT(context && pPassInfo);
  15789 
  15790     VMA_DEBUG_LOG("vmaBeginDefragmentationPass");
  15791 
  15792     VMA_DEBUG_GLOBAL_MUTEX_LOCK
  15793 
  15794     return context->DefragmentPassBegin(*pPassInfo);
  15795 }
  15796 
  15797 VMA_CALL_PRE VkResult VMA_CALL_POST vmaEndDefragmentationPass(
  15798     VmaAllocator VMA_NOT_NULL allocator,
  15799     VmaDefragmentationContext VMA_NOT_NULL context,
  15800     VmaDefragmentationPassMoveInfo* VMA_NOT_NULL pPassInfo)
  15801 {
  15802     VMA_ASSERT(context && pPassInfo);
  15803 
  15804     VMA_DEBUG_LOG("vmaEndDefragmentationPass");
  15805 
  15806     VMA_DEBUG_GLOBAL_MUTEX_LOCK
  15807 
  15808     return context->DefragmentPassEnd(*pPassInfo);
  15809 }
  15810 
  15811 VMA_CALL_PRE VkResult VMA_CALL_POST vmaBindBufferMemory(
  15812     VmaAllocator allocator,
  15813     VmaAllocation allocation,
  15814     VkBuffer buffer)
  15815 {
  15816     VMA_ASSERT(allocator && allocation && buffer);
  15817 
  15818     VMA_DEBUG_LOG("vmaBindBufferMemory");
  15819 
  15820     VMA_DEBUG_GLOBAL_MUTEX_LOCK
  15821 
  15822     return allocator->BindBufferMemory(allocation, 0, buffer, VMA_NULL);
  15823 }
  15824 
  15825 VMA_CALL_PRE VkResult VMA_CALL_POST vmaBindBufferMemory2(
  15826     VmaAllocator allocator,
  15827     VmaAllocation allocation,
  15828     VkDeviceSize allocationLocalOffset,
  15829     VkBuffer buffer,
  15830     const void* pNext)
  15831 {
  15832     VMA_ASSERT(allocator && allocation && buffer);
  15833 
  15834     VMA_DEBUG_LOG("vmaBindBufferMemory2");
  15835 
  15836     VMA_DEBUG_GLOBAL_MUTEX_LOCK
  15837 
  15838     return allocator->BindBufferMemory(allocation, allocationLocalOffset, buffer, pNext);
  15839 }
  15840 
  15841 VMA_CALL_PRE VkResult VMA_CALL_POST vmaBindImageMemory(
  15842     VmaAllocator allocator,
  15843     VmaAllocation allocation,
  15844     VkImage image)
  15845 {
  15846     VMA_ASSERT(allocator && allocation && image);
  15847 
  15848     VMA_DEBUG_LOG("vmaBindImageMemory");
  15849 
  15850     VMA_DEBUG_GLOBAL_MUTEX_LOCK
  15851 
  15852     return allocator->BindImageMemory(allocation, 0, image, VMA_NULL);
  15853 }
  15854 
  15855 VMA_CALL_PRE VkResult VMA_CALL_POST vmaBindImageMemory2(
  15856     VmaAllocator allocator,
  15857     VmaAllocation allocation,
  15858     VkDeviceSize allocationLocalOffset,
  15859     VkImage image,
  15860     const void* pNext)
  15861 {
  15862     VMA_ASSERT(allocator && allocation && image);
  15863 
  15864     VMA_DEBUG_LOG("vmaBindImageMemory2");
  15865 
  15866     VMA_DEBUG_GLOBAL_MUTEX_LOCK
  15867 
  15868     return allocator->BindImageMemory(allocation, allocationLocalOffset, image, pNext);
  15869 }
  15870 
  15871 VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateBuffer(
  15872     VmaAllocator allocator,
  15873     const VkBufferCreateInfo* pBufferCreateInfo,
  15874     const VmaAllocationCreateInfo* pAllocationCreateInfo,
  15875     VkBuffer* pBuffer,
  15876     VmaAllocation* pAllocation,
  15877     VmaAllocationInfo* pAllocationInfo)
  15878 {
  15879     VMA_ASSERT(allocator && pBufferCreateInfo && pAllocationCreateInfo && pBuffer && pAllocation);
  15880 
  15881     if(pBufferCreateInfo->size == 0)
  15882     {
  15883         return VK_ERROR_INITIALIZATION_FAILED;
  15884     }
  15885     if((pBufferCreateInfo->usage & VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT_COPY) != 0 &&
  15886         !allocator->m_UseKhrBufferDeviceAddress)
  15887     {
  15888         VMA_ASSERT(0 && "Creating a buffer with VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT is not valid if VMA_ALLOCATOR_CREATE_BUFFER_DEVICE_ADDRESS_BIT was not used.");
  15889         return VK_ERROR_INITIALIZATION_FAILED;
  15890     }
  15891 
  15892     VMA_DEBUG_LOG("vmaCreateBuffer");
  15893 
  15894     VMA_DEBUG_GLOBAL_MUTEX_LOCK
  15895 
  15896     *pBuffer = VK_NULL_HANDLE;
  15897     *pAllocation = VK_NULL_HANDLE;
  15898 
  15899     // 1. Create VkBuffer.
  15900     VkResult res = (*allocator->GetVulkanFunctions().vkCreateBuffer)(
  15901         allocator->m_hDevice,
  15902         pBufferCreateInfo,
  15903         allocator->GetAllocationCallbacks(),
  15904         pBuffer);
  15905     if(res >= 0)
  15906     {
  15907         // 2. vkGetBufferMemoryRequirements.
  15908         VkMemoryRequirements vkMemReq = {};
  15909         bool requiresDedicatedAllocation = false;
  15910         bool prefersDedicatedAllocation  = false;
  15911         allocator->GetBufferMemoryRequirements(*pBuffer, vkMemReq,
  15912             requiresDedicatedAllocation, prefersDedicatedAllocation);
  15913 
  15914         // 3. Allocate memory using allocator.
  15915         res = allocator->AllocateMemory(
  15916             vkMemReq,
  15917             requiresDedicatedAllocation,
  15918             prefersDedicatedAllocation,
  15919             *pBuffer, // dedicatedBuffer
  15920             VK_NULL_HANDLE, // dedicatedImage
  15921             VmaBufferImageUsage(*pBufferCreateInfo, allocator->m_UseKhrMaintenance5), // dedicatedBufferImageUsage
  15922             *pAllocationCreateInfo,
  15923             VMA_SUBALLOCATION_TYPE_BUFFER,
  15924             1, // allocationCount
  15925             pAllocation);
  15926 
  15927         if(res >= 0)
  15928         {
  15929             // 4. Bind buffer with memory.
  15930             if((pAllocationCreateInfo->flags & VMA_ALLOCATION_CREATE_DONT_BIND_BIT) == 0)
  15931             {
  15932                 res = allocator->BindBufferMemory(*pAllocation, 0, *pBuffer, VMA_NULL);
  15933             }
  15934             if(res >= 0)
  15935             {
  15936                 // All steps succeeded.
  15937                 #if VMA_STATS_STRING_ENABLED
  15938                     (*pAllocation)->InitBufferUsage(*pBufferCreateInfo, allocator->m_UseKhrMaintenance5);
  15939                 #endif
  15940                 if(pAllocationInfo != VMA_NULL)
  15941                 {
  15942                     allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
  15943                 }
  15944 
  15945                 return VK_SUCCESS;
  15946             }
  15947             allocator->FreeMemory(
  15948                 1, // allocationCount
  15949                 pAllocation);
  15950             *pAllocation = VK_NULL_HANDLE;
  15951             (*allocator->GetVulkanFunctions().vkDestroyBuffer)(allocator->m_hDevice, *pBuffer, allocator->GetAllocationCallbacks());
  15952             *pBuffer = VK_NULL_HANDLE;
  15953             return res;
  15954         }
  15955         (*allocator->GetVulkanFunctions().vkDestroyBuffer)(allocator->m_hDevice, *pBuffer, allocator->GetAllocationCallbacks());
  15956         *pBuffer = VK_NULL_HANDLE;
  15957         return res;
  15958     }
  15959     return res;
  15960 }
  15961 
  15962 VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateBufferWithAlignment(
  15963     VmaAllocator allocator,
  15964     const VkBufferCreateInfo* pBufferCreateInfo,
  15965     const VmaAllocationCreateInfo* pAllocationCreateInfo,
  15966     VkDeviceSize minAlignment,
  15967     VkBuffer* pBuffer,
  15968     VmaAllocation* pAllocation,
  15969     VmaAllocationInfo* pAllocationInfo)
  15970 {
  15971     VMA_ASSERT(allocator && pBufferCreateInfo && pAllocationCreateInfo && VmaIsPow2(minAlignment) && pBuffer && pAllocation);
  15972 
  15973     if(pBufferCreateInfo->size == 0)
  15974     {
  15975         return VK_ERROR_INITIALIZATION_FAILED;
  15976     }
  15977     if((pBufferCreateInfo->usage & VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT_COPY) != 0 &&
  15978         !allocator->m_UseKhrBufferDeviceAddress)
  15979     {
  15980         VMA_ASSERT(0 && "Creating a buffer with VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT is not valid if VMA_ALLOCATOR_CREATE_BUFFER_DEVICE_ADDRESS_BIT was not used.");
  15981         return VK_ERROR_INITIALIZATION_FAILED;
  15982     }
  15983 
  15984     VMA_DEBUG_LOG("vmaCreateBufferWithAlignment");
  15985 
  15986     VMA_DEBUG_GLOBAL_MUTEX_LOCK
  15987 
  15988     *pBuffer = VK_NULL_HANDLE;
  15989     *pAllocation = VK_NULL_HANDLE;
  15990 
  15991     // 1. Create VkBuffer.
  15992     VkResult res = (*allocator->GetVulkanFunctions().vkCreateBuffer)(
  15993         allocator->m_hDevice,
  15994         pBufferCreateInfo,
  15995         allocator->GetAllocationCallbacks(),
  15996         pBuffer);
  15997     if(res >= 0)
  15998     {
  15999         // 2. vkGetBufferMemoryRequirements.
  16000         VkMemoryRequirements vkMemReq = {};
  16001         bool requiresDedicatedAllocation = false;
  16002         bool prefersDedicatedAllocation  = false;
  16003         allocator->GetBufferMemoryRequirements(*pBuffer, vkMemReq,
  16004             requiresDedicatedAllocation, prefersDedicatedAllocation);
  16005 
  16006         // 2a. Include minAlignment
  16007         vkMemReq.alignment = VMA_MAX(vkMemReq.alignment, minAlignment);
  16008 
  16009         // 3. Allocate memory using allocator.
  16010         res = allocator->AllocateMemory(
  16011             vkMemReq,
  16012             requiresDedicatedAllocation,
  16013             prefersDedicatedAllocation,
  16014             *pBuffer, // dedicatedBuffer
  16015             VK_NULL_HANDLE, // dedicatedImage
  16016             VmaBufferImageUsage(*pBufferCreateInfo, allocator->m_UseKhrMaintenance5), // dedicatedBufferImageUsage
  16017             *pAllocationCreateInfo,
  16018             VMA_SUBALLOCATION_TYPE_BUFFER,
  16019             1, // allocationCount
  16020             pAllocation);
  16021 
  16022         if(res >= 0)
  16023         {
  16024             // 4. Bind buffer with memory.
  16025             if((pAllocationCreateInfo->flags & VMA_ALLOCATION_CREATE_DONT_BIND_BIT) == 0)
  16026             {
  16027                 res = allocator->BindBufferMemory(*pAllocation, 0, *pBuffer, VMA_NULL);
  16028             }
  16029             if(res >= 0)
  16030             {
  16031                 // All steps succeeded.
  16032                 #if VMA_STATS_STRING_ENABLED
  16033                     (*pAllocation)->InitBufferUsage(*pBufferCreateInfo, allocator->m_UseKhrMaintenance5);
  16034                 #endif
  16035                 if(pAllocationInfo != VMA_NULL)
  16036                 {
  16037                     allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
  16038                 }
  16039 
  16040                 return VK_SUCCESS;
  16041             }
  16042             allocator->FreeMemory(
  16043                 1, // allocationCount
  16044                 pAllocation);
  16045             *pAllocation = VK_NULL_HANDLE;
  16046             (*allocator->GetVulkanFunctions().vkDestroyBuffer)(allocator->m_hDevice, *pBuffer, allocator->GetAllocationCallbacks());
  16047             *pBuffer = VK_NULL_HANDLE;
  16048             return res;
  16049         }
  16050         (*allocator->GetVulkanFunctions().vkDestroyBuffer)(allocator->m_hDevice, *pBuffer, allocator->GetAllocationCallbacks());
  16051         *pBuffer = VK_NULL_HANDLE;
  16052         return res;
  16053     }
  16054     return res;
  16055 }
  16056 
  16057 VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateAliasingBuffer(
  16058     VmaAllocator VMA_NOT_NULL allocator,
  16059     VmaAllocation VMA_NOT_NULL allocation,
  16060     const VkBufferCreateInfo* VMA_NOT_NULL pBufferCreateInfo,
  16061     VkBuffer VMA_NULLABLE_NON_DISPATCHABLE* VMA_NOT_NULL pBuffer)
  16062 {
  16063     return vmaCreateAliasingBuffer2(allocator, allocation, 0, pBufferCreateInfo, pBuffer);
  16064 }
  16065 
  16066 VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateAliasingBuffer2(
  16067     VmaAllocator VMA_NOT_NULL allocator,
  16068     VmaAllocation VMA_NOT_NULL allocation,
  16069     VkDeviceSize allocationLocalOffset,
  16070     const VkBufferCreateInfo* VMA_NOT_NULL pBufferCreateInfo,
  16071     VkBuffer VMA_NULLABLE_NON_DISPATCHABLE* VMA_NOT_NULL pBuffer)
  16072 {
  16073     VMA_ASSERT(allocator && pBufferCreateInfo && pBuffer && allocation);
  16074     VMA_ASSERT(allocationLocalOffset + pBufferCreateInfo->size <= allocation->GetSize());
  16075 
  16076     VMA_DEBUG_LOG("vmaCreateAliasingBuffer2");
  16077 
  16078     *pBuffer = VK_NULL_HANDLE;
  16079 
  16080     if (pBufferCreateInfo->size == 0)
  16081     {
  16082         return VK_ERROR_INITIALIZATION_FAILED;
  16083     }
  16084     if ((pBufferCreateInfo->usage & VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT_COPY) != 0 &&
  16085         !allocator->m_UseKhrBufferDeviceAddress)
  16086     {
  16087         VMA_ASSERT(0 && "Creating a buffer with VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT is not valid if VMA_ALLOCATOR_CREATE_BUFFER_DEVICE_ADDRESS_BIT was not used.");
  16088         return VK_ERROR_INITIALIZATION_FAILED;
  16089     }
  16090 
  16091     VMA_DEBUG_GLOBAL_MUTEX_LOCK
  16092 
  16093     // 1. Create VkBuffer.
  16094     VkResult res = (*allocator->GetVulkanFunctions().vkCreateBuffer)(
  16095         allocator->m_hDevice,
  16096         pBufferCreateInfo,
  16097         allocator->GetAllocationCallbacks(),
  16098         pBuffer);
  16099     if (res >= 0)
  16100     {
  16101         // 2. Bind buffer with memory.
  16102         res = allocator->BindBufferMemory(allocation, allocationLocalOffset, *pBuffer, VMA_NULL);
  16103         if (res >= 0)
  16104         {
  16105             return VK_SUCCESS;
  16106         }
  16107         (*allocator->GetVulkanFunctions().vkDestroyBuffer)(allocator->m_hDevice, *pBuffer, allocator->GetAllocationCallbacks());
  16108     }
  16109     return res;
  16110 }
  16111 
  16112 VMA_CALL_PRE void VMA_CALL_POST vmaDestroyBuffer(
  16113     VmaAllocator allocator,
  16114     VkBuffer buffer,
  16115     VmaAllocation allocation)
  16116 {
  16117     VMA_ASSERT(allocator);
  16118 
  16119     if(buffer == VK_NULL_HANDLE && allocation == VK_NULL_HANDLE)
  16120     {
  16121         return;
  16122     }
  16123 
  16124     VMA_DEBUG_LOG("vmaDestroyBuffer");
  16125 
  16126     VMA_DEBUG_GLOBAL_MUTEX_LOCK
  16127 
  16128     if(buffer != VK_NULL_HANDLE)
  16129     {
  16130         (*allocator->GetVulkanFunctions().vkDestroyBuffer)(allocator->m_hDevice, buffer, allocator->GetAllocationCallbacks());
  16131     }
  16132 
  16133     if(allocation != VK_NULL_HANDLE)
  16134     {
  16135         allocator->FreeMemory(
  16136             1, // allocationCount
  16137             &allocation);
  16138     }
  16139 }
  16140 
  16141 VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateImage(
  16142     VmaAllocator allocator,
  16143     const VkImageCreateInfo* pImageCreateInfo,
  16144     const VmaAllocationCreateInfo* pAllocationCreateInfo,
  16145     VkImage* pImage,
  16146     VmaAllocation* pAllocation,
  16147     VmaAllocationInfo* pAllocationInfo)
  16148 {
  16149     VMA_ASSERT(allocator && pImageCreateInfo && pAllocationCreateInfo && pImage && pAllocation);
  16150 
  16151     if(pImageCreateInfo->extent.width == 0 ||
  16152         pImageCreateInfo->extent.height == 0 ||
  16153         pImageCreateInfo->extent.depth == 0 ||
  16154         pImageCreateInfo->mipLevels == 0 ||
  16155         pImageCreateInfo->arrayLayers == 0)
  16156     {
  16157         return VK_ERROR_INITIALIZATION_FAILED;
  16158     }
  16159 
  16160     VMA_DEBUG_LOG("vmaCreateImage");
  16161 
  16162     VMA_DEBUG_GLOBAL_MUTEX_LOCK
  16163 
  16164     *pImage = VK_NULL_HANDLE;
  16165     *pAllocation = VK_NULL_HANDLE;
  16166 
  16167     // 1. Create VkImage.
  16168     VkResult res = (*allocator->GetVulkanFunctions().vkCreateImage)(
  16169         allocator->m_hDevice,
  16170         pImageCreateInfo,
  16171         allocator->GetAllocationCallbacks(),
  16172         pImage);
  16173     if(res >= 0)
  16174     {
  16175         VmaSuballocationType suballocType = pImageCreateInfo->tiling == VK_IMAGE_TILING_OPTIMAL ?
  16176             VMA_SUBALLOCATION_TYPE_IMAGE_OPTIMAL :
  16177             VMA_SUBALLOCATION_TYPE_IMAGE_LINEAR;
  16178 
  16179         // 2. Allocate memory using allocator.
  16180         VkMemoryRequirements vkMemReq = {};
  16181         bool requiresDedicatedAllocation = false;
  16182         bool prefersDedicatedAllocation  = false;
  16183         allocator->GetImageMemoryRequirements(*pImage, vkMemReq,
  16184             requiresDedicatedAllocation, prefersDedicatedAllocation);
  16185 
  16186         res = allocator->AllocateMemory(
  16187             vkMemReq,
  16188             requiresDedicatedAllocation,
  16189             prefersDedicatedAllocation,
  16190             VK_NULL_HANDLE, // dedicatedBuffer
  16191             *pImage, // dedicatedImage
  16192             VmaBufferImageUsage(*pImageCreateInfo), // dedicatedBufferImageUsage
  16193             *pAllocationCreateInfo,
  16194             suballocType,
  16195             1, // allocationCount
  16196             pAllocation);
  16197 
  16198         if(res >= 0)
  16199         {
  16200             // 3. Bind image with memory.
  16201             if((pAllocationCreateInfo->flags & VMA_ALLOCATION_CREATE_DONT_BIND_BIT) == 0)
  16202             {
  16203                 res = allocator->BindImageMemory(*pAllocation, 0, *pImage, VMA_NULL);
  16204             }
  16205             if(res >= 0)
  16206             {
  16207                 // All steps succeeded.
  16208                 #if VMA_STATS_STRING_ENABLED
  16209                     (*pAllocation)->InitImageUsage(*pImageCreateInfo);
  16210                 #endif
  16211                 if(pAllocationInfo != VMA_NULL)
  16212                 {
  16213                     allocator->GetAllocationInfo(*pAllocation, pAllocationInfo);
  16214                 }
  16215 
  16216                 return VK_SUCCESS;
  16217             }
  16218             allocator->FreeMemory(
  16219                 1, // allocationCount
  16220                 pAllocation);
  16221             *pAllocation = VK_NULL_HANDLE;
  16222             (*allocator->GetVulkanFunctions().vkDestroyImage)(allocator->m_hDevice, *pImage, allocator->GetAllocationCallbacks());
  16223             *pImage = VK_NULL_HANDLE;
  16224             return res;
  16225         }
  16226         (*allocator->GetVulkanFunctions().vkDestroyImage)(allocator->m_hDevice, *pImage, allocator->GetAllocationCallbacks());
  16227         *pImage = VK_NULL_HANDLE;
  16228         return res;
  16229     }
  16230     return res;
  16231 }
  16232 
  16233 VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateAliasingImage(
  16234     VmaAllocator VMA_NOT_NULL allocator,
  16235     VmaAllocation VMA_NOT_NULL allocation,
  16236     const VkImageCreateInfo* VMA_NOT_NULL pImageCreateInfo,
  16237     VkImage VMA_NULLABLE_NON_DISPATCHABLE* VMA_NOT_NULL pImage)
  16238 {
  16239     return vmaCreateAliasingImage2(allocator, allocation, 0, pImageCreateInfo, pImage);
  16240 }
  16241 
  16242 VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateAliasingImage2(
  16243     VmaAllocator VMA_NOT_NULL allocator,
  16244     VmaAllocation VMA_NOT_NULL allocation,
  16245     VkDeviceSize allocationLocalOffset,
  16246     const VkImageCreateInfo* VMA_NOT_NULL pImageCreateInfo,
  16247     VkImage VMA_NULLABLE_NON_DISPATCHABLE* VMA_NOT_NULL pImage)
  16248 {
  16249     VMA_ASSERT(allocator && pImageCreateInfo && pImage && allocation);
  16250 
  16251     *pImage = VK_NULL_HANDLE;
  16252 
  16253     VMA_DEBUG_LOG("vmaCreateAliasingImage2");
  16254 
  16255     if (pImageCreateInfo->extent.width == 0 ||
  16256         pImageCreateInfo->extent.height == 0 ||
  16257         pImageCreateInfo->extent.depth == 0 ||
  16258         pImageCreateInfo->mipLevels == 0 ||
  16259         pImageCreateInfo->arrayLayers == 0)
  16260     {
  16261         return VK_ERROR_INITIALIZATION_FAILED;
  16262     }
  16263 
  16264     VMA_DEBUG_GLOBAL_MUTEX_LOCK
  16265 
  16266     // 1. Create VkImage.
  16267     VkResult res = (*allocator->GetVulkanFunctions().vkCreateImage)(
  16268         allocator->m_hDevice,
  16269         pImageCreateInfo,
  16270         allocator->GetAllocationCallbacks(),
  16271         pImage);
  16272     if (res >= 0)
  16273     {
  16274         // 2. Bind image with memory.
  16275         res = allocator->BindImageMemory(allocation, allocationLocalOffset, *pImage, VMA_NULL);
  16276         if (res >= 0)
  16277         {
  16278             return VK_SUCCESS;
  16279         }
  16280         (*allocator->GetVulkanFunctions().vkDestroyImage)(allocator->m_hDevice, *pImage, allocator->GetAllocationCallbacks());
  16281     }
  16282     return res;
  16283 }
  16284 
  16285 VMA_CALL_PRE void VMA_CALL_POST vmaDestroyImage(
  16286     VmaAllocator VMA_NOT_NULL allocator,
  16287     VkImage VMA_NULLABLE_NON_DISPATCHABLE image,
  16288     VmaAllocation VMA_NULLABLE allocation)
  16289 {
  16290     VMA_ASSERT(allocator);
  16291 
  16292     if(image == VK_NULL_HANDLE && allocation == VK_NULL_HANDLE)
  16293     {
  16294         return;
  16295     }
  16296 
  16297     VMA_DEBUG_LOG("vmaDestroyImage");
  16298 
  16299     VMA_DEBUG_GLOBAL_MUTEX_LOCK
  16300 
  16301     if(image != VK_NULL_HANDLE)
  16302     {
  16303         (*allocator->GetVulkanFunctions().vkDestroyImage)(allocator->m_hDevice, image, allocator->GetAllocationCallbacks());
  16304     }
  16305     if(allocation != VK_NULL_HANDLE)
  16306     {
  16307         allocator->FreeMemory(
  16308             1, // allocationCount
  16309             &allocation);
  16310     }
  16311 }
  16312 
  16313 VMA_CALL_PRE VkResult VMA_CALL_POST vmaCreateVirtualBlock(
  16314     const VmaVirtualBlockCreateInfo* VMA_NOT_NULL pCreateInfo,
  16315     VmaVirtualBlock VMA_NULLABLE * VMA_NOT_NULL pVirtualBlock)
  16316 {
  16317     VMA_ASSERT(pCreateInfo && pVirtualBlock);
  16318     VMA_ASSERT(pCreateInfo->size > 0);
  16319     VMA_DEBUG_LOG("vmaCreateVirtualBlock");
  16320     VMA_DEBUG_GLOBAL_MUTEX_LOCK;
  16321     *pVirtualBlock = vma_new(pCreateInfo->pAllocationCallbacks, VmaVirtualBlock_T)(*pCreateInfo);
  16322     VkResult res = (*pVirtualBlock)->Init();
  16323     if(res < 0)
  16324     {
  16325         vma_delete(pCreateInfo->pAllocationCallbacks, *pVirtualBlock);
  16326         *pVirtualBlock = VK_NULL_HANDLE;
  16327     }
  16328     return res;
  16329 }
  16330 
  16331 VMA_CALL_PRE void VMA_CALL_POST vmaDestroyVirtualBlock(VmaVirtualBlock VMA_NULLABLE virtualBlock)
  16332 {
  16333     if(virtualBlock != VK_NULL_HANDLE)
  16334     {
  16335         VMA_DEBUG_LOG("vmaDestroyVirtualBlock");
  16336         VMA_DEBUG_GLOBAL_MUTEX_LOCK;
  16337         VkAllocationCallbacks allocationCallbacks = virtualBlock->m_AllocationCallbacks; // Have to copy the callbacks when destroying.
  16338         vma_delete(&allocationCallbacks, virtualBlock);
  16339     }
  16340 }
  16341 
  16342 VMA_CALL_PRE VkBool32 VMA_CALL_POST vmaIsVirtualBlockEmpty(VmaVirtualBlock VMA_NOT_NULL virtualBlock)
  16343 {
  16344     VMA_ASSERT(virtualBlock != VK_NULL_HANDLE);
  16345     VMA_DEBUG_LOG("vmaIsVirtualBlockEmpty");
  16346     VMA_DEBUG_GLOBAL_MUTEX_LOCK;
  16347     return virtualBlock->IsEmpty() ? VK_TRUE : VK_FALSE;
  16348 }
  16349 
  16350 VMA_CALL_PRE void VMA_CALL_POST vmaGetVirtualAllocationInfo(VmaVirtualBlock VMA_NOT_NULL virtualBlock,
  16351     VmaVirtualAllocation VMA_NOT_NULL_NON_DISPATCHABLE allocation, VmaVirtualAllocationInfo* VMA_NOT_NULL pVirtualAllocInfo)
  16352 {
  16353     VMA_ASSERT(virtualBlock != VK_NULL_HANDLE && pVirtualAllocInfo != VMA_NULL);
  16354     VMA_DEBUG_LOG("vmaGetVirtualAllocationInfo");
  16355     VMA_DEBUG_GLOBAL_MUTEX_LOCK;
  16356     virtualBlock->GetAllocationInfo(allocation, *pVirtualAllocInfo);
  16357 }
  16358 
  16359 VMA_CALL_PRE VkResult VMA_CALL_POST vmaVirtualAllocate(VmaVirtualBlock VMA_NOT_NULL virtualBlock,
  16360     const VmaVirtualAllocationCreateInfo* VMA_NOT_NULL pCreateInfo, VmaVirtualAllocation VMA_NULLABLE_NON_DISPATCHABLE* VMA_NOT_NULL pAllocation,
  16361     VkDeviceSize* VMA_NULLABLE pOffset)
  16362 {
  16363     VMA_ASSERT(virtualBlock != VK_NULL_HANDLE && pCreateInfo != VMA_NULL && pAllocation != VMA_NULL);
  16364     VMA_DEBUG_LOG("vmaVirtualAllocate");
  16365     VMA_DEBUG_GLOBAL_MUTEX_LOCK;
  16366     return virtualBlock->Allocate(*pCreateInfo, *pAllocation, pOffset);
  16367 }
  16368 
  16369 VMA_CALL_PRE void VMA_CALL_POST vmaVirtualFree(VmaVirtualBlock VMA_NOT_NULL virtualBlock, VmaVirtualAllocation VMA_NULLABLE_NON_DISPATCHABLE allocation)
  16370 {
  16371     if(allocation != VK_NULL_HANDLE)
  16372     {
  16373         VMA_ASSERT(virtualBlock != VK_NULL_HANDLE);
  16374         VMA_DEBUG_LOG("vmaVirtualFree");
  16375         VMA_DEBUG_GLOBAL_MUTEX_LOCK;
  16376         virtualBlock->Free(allocation);
  16377     }
  16378 }
  16379 
  16380 VMA_CALL_PRE void VMA_CALL_POST vmaClearVirtualBlock(VmaVirtualBlock VMA_NOT_NULL virtualBlock)
  16381 {
  16382     VMA_ASSERT(virtualBlock != VK_NULL_HANDLE);
  16383     VMA_DEBUG_LOG("vmaClearVirtualBlock");
  16384     VMA_DEBUG_GLOBAL_MUTEX_LOCK;
  16385     virtualBlock->Clear();
  16386 }
  16387 
  16388 VMA_CALL_PRE void VMA_CALL_POST vmaSetVirtualAllocationUserData(VmaVirtualBlock VMA_NOT_NULL virtualBlock,
  16389     VmaVirtualAllocation VMA_NOT_NULL_NON_DISPATCHABLE allocation, void* VMA_NULLABLE pUserData)
  16390 {
  16391     VMA_ASSERT(virtualBlock != VK_NULL_HANDLE);
  16392     VMA_DEBUG_LOG("vmaSetVirtualAllocationUserData");
  16393     VMA_DEBUG_GLOBAL_MUTEX_LOCK;
  16394     virtualBlock->SetAllocationUserData(allocation, pUserData);
  16395 }
  16396 
  16397 VMA_CALL_PRE void VMA_CALL_POST vmaGetVirtualBlockStatistics(VmaVirtualBlock VMA_NOT_NULL virtualBlock,
  16398     VmaStatistics* VMA_NOT_NULL pStats)
  16399 {
  16400     VMA_ASSERT(virtualBlock != VK_NULL_HANDLE && pStats != VMA_NULL);
  16401     VMA_DEBUG_LOG("vmaGetVirtualBlockStatistics");
  16402     VMA_DEBUG_GLOBAL_MUTEX_LOCK;
  16403     virtualBlock->GetStatistics(*pStats);
  16404 }
  16405 
  16406 VMA_CALL_PRE void VMA_CALL_POST vmaCalculateVirtualBlockStatistics(VmaVirtualBlock VMA_NOT_NULL virtualBlock,
  16407     VmaDetailedStatistics* VMA_NOT_NULL pStats)
  16408 {
  16409     VMA_ASSERT(virtualBlock != VK_NULL_HANDLE && pStats != VMA_NULL);
  16410     VMA_DEBUG_LOG("vmaCalculateVirtualBlockStatistics");
  16411     VMA_DEBUG_GLOBAL_MUTEX_LOCK;
  16412     virtualBlock->CalculateDetailedStatistics(*pStats);
  16413 }
  16414 
  16415 #if VMA_STATS_STRING_ENABLED
  16416 
  16417 VMA_CALL_PRE void VMA_CALL_POST vmaBuildVirtualBlockStatsString(VmaVirtualBlock VMA_NOT_NULL virtualBlock,
  16418     char* VMA_NULLABLE * VMA_NOT_NULL ppStatsString, VkBool32 detailedMap)
  16419 {
  16420     VMA_ASSERT(virtualBlock != VK_NULL_HANDLE && ppStatsString != VMA_NULL);
  16421     VMA_DEBUG_GLOBAL_MUTEX_LOCK;
  16422     const VkAllocationCallbacks* allocationCallbacks = virtualBlock->GetAllocationCallbacks();
  16423     VmaStringBuilder sb(allocationCallbacks);
  16424     virtualBlock->BuildStatsString(detailedMap != VK_FALSE, sb);
  16425     *ppStatsString = VmaCreateStringCopy(allocationCallbacks, sb.GetData(), sb.GetLength());
  16426 }
  16427 
  16428 VMA_CALL_PRE void VMA_CALL_POST vmaFreeVirtualBlockStatsString(VmaVirtualBlock VMA_NOT_NULL virtualBlock,
  16429     char* VMA_NULLABLE pStatsString)
  16430 {
  16431     if(pStatsString != VMA_NULL)
  16432     {
  16433         VMA_ASSERT(virtualBlock != VK_NULL_HANDLE);
  16434         VMA_DEBUG_GLOBAL_MUTEX_LOCK;
  16435         VmaFreeString(virtualBlock->GetAllocationCallbacks(), pStatsString);
  16436     }
  16437 }
  16438 #endif // VMA_STATS_STRING_ENABLED
  16439 #endif // _VMA_PUBLIC_INTERFACE
  16440 #endif // VMA_IMPLEMENTATION
  16441 
  16442 /**
  16443 \page quick_start Quick start
  16444 
  16445 \section quick_start_project_setup Project setup
  16446 
  16447 Vulkan Memory Allocator comes in the form of an "stb-style" single header file.
  16448 While you can pull the entire repository, e.g. as a Git submodule, and a CMake script is provided,
  16449 you don't need to build it as a separate library project.
  16450 You can add the file "vk_mem_alloc.h" directly to your project and commit it to your code repository next to your other source files.
  16451 
  16452 "Single header" doesn't mean that everything is contained in C/C++ declarations,
  16453 as tends to be the case with inline functions or C++ templates.
  16454 It means that the implementation is bundled with the interface in a single file and needs to be enabled using a preprocessor macro.
  16455 If you don't do it properly, you will get linker errors.
  16456 
  16457 To do it properly:
  16458 
  16459 -# Include the "vk_mem_alloc.h" file in each CPP file where you want to use the library.
  16460    This includes declarations of all members of the library.
  16461 -# In exactly one CPP file define the following macro before this include.
  16462    It also enables the internal definitions.
  16463 
  16464 \code
  16465 #define VMA_IMPLEMENTATION
  16466 #include "vk_mem_alloc.h"
  16467 \endcode
  16468 
  16469 It may be a good idea to create a dedicated CPP file just for this purpose, e.g. "VmaUsage.cpp".
  16470 
  16471 This library includes the header `<vulkan/vulkan.h>`, which in turn
  16472 includes `<windows.h>` on Windows. If you need some specific macros defined
  16473 before including these headers (like `WIN32_LEAN_AND_MEAN` or
  16474 `WINVER` for Windows, `VK_USE_PLATFORM_WIN32_KHR` for Vulkan), you must define
  16475 them before every `#include` of this library.
  16476 It may be a good idea to create a dedicated header file for this purpose, e.g. "VmaUsage.h",
  16477 that will be included in other source files instead of including the VMA header directly.
  16478 
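For example, such a pair of files could look like this (just a sketch; which macros you need, if any, depends on your platform and configuration):

\code
// VmaUsage.h - the header that other source files include instead of vk_mem_alloc.h.
#pragma once

// Macros that must be visible before the Vulkan headers, e.g. on Windows:
#define WIN32_LEAN_AND_MEAN
#define VK_USE_PLATFORM_WIN32_KHR

#include "vk_mem_alloc.h"
\endcode

\code
// VmaUsage.cpp - the one and only file that compiles the implementation.
#define VMA_IMPLEMENTATION
#include "VmaUsage.h"
\endcode
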
  16479 This library is written in C++, but it has a C-compatible interface.
  16480 Thus, you can include and use "vk_mem_alloc.h" in C or C++ code, but the full
  16481 implementation with the `VMA_IMPLEMENTATION` macro must be compiled as C++, NOT as C.
  16482 Some features of C++14 are used and required. Features of C++20 are used optionally when available.
  16483 Some headers of the standard C and C++ library are used, but STL containers, RTTI, and C++ exceptions are not.
  16484 
  16485 
  16486 \section quick_start_initialization Initialization
  16487 
  16488 VMA offers a library interface in a style similar to Vulkan, with object handles like #VmaAllocation,
  16489 structures describing parameters of objects to be created like #VmaAllocationCreateInfo,
  16490 and error codes returned from functions using the `VkResult` type.
  16491 
  16492 The first and main object that needs to be created is #VmaAllocator.
  16493 It represents the initialization of the entire library.
  16494 Only one such object should be created per `VkDevice`.
  16495 You should create it at program startup, after `VkDevice` was created, and before any device memory allocation needs to be made.
  16496 It must be destroyed before `VkDevice` is destroyed.
  16497 
  16498 At program startup:
  16499 
  16500 -# Initialize Vulkan to have `VkInstance`, `VkPhysicalDevice`, and `VkDevice` objects.
  16501 -# Fill the VmaAllocatorCreateInfo structure and call vmaCreateAllocator() to create the #VmaAllocator object.
  16502 
  16503 Only the members `physicalDevice`, `device`, and `instance` are required.
  16504 However, you should inform the library which Vulkan version you use by setting
  16505 VmaAllocatorCreateInfo::vulkanApiVersion and which extensions you enabled
  16506 by setting VmaAllocatorCreateInfo::flags.
  16507 Otherwise, VMA will use only the features of Vulkan 1.0 core with no extensions.
  16508 See below for details.
  16509 
  16510 \subsection quick_start_initialization_selecting_vulkan_version Selecting Vulkan version
  16511 
  16512 VMA supports Vulkan versions down to 1.0, for backward compatibility.
  16513 If you want to use a higher version, you need to inform the library about it.
  16514 This is a two-step process.
  16515 
  16516 <b>Step 1: Compile time.</b> By default, VMA compiles with code supporting the highest
  16517 Vulkan version found in the included `<vulkan/vulkan.h>` that is also supported by the library.
  16518 If this is OK, you don't need to do anything.
  16519 However, if you want to compile VMA as if only some lower Vulkan version was available,
  16520 define the macro `VMA_VULKAN_VERSION` before every `#include "vk_mem_alloc.h"`.
  16521 It should have a decimal numeric value of the form ABBBCCC, where A = major, BBB = minor, and CCC = patch Vulkan version.
  16522 For example, to compile against Vulkan 1.2:
  16523 
  16524 \code
  16525 #define VMA_VULKAN_VERSION 1002000 // Vulkan 1.2
  16526 #include "vk_mem_alloc.h"
  16527 \endcode
  16528 
  16529 <b>Step 2: Runtime.</b> Even when compiled with a higher Vulkan version available,
  16530 VMA can be limited to using only the features of a lower version, which is configurable during creation of the #VmaAllocator object.
  16531 By default, only Vulkan 1.0 is used.
  16532 To initialize the allocator with support for a higher Vulkan version, you need to set the member
  16533 VmaAllocatorCreateInfo::vulkanApiVersion to an appropriate value, e.g. using constants like `VK_API_VERSION_1_2`.
  16534 See code sample below.
  16535 
  16536 \subsection quick_start_initialization_importing_vulkan_functions Importing Vulkan functions
  16537 
  16538 You may need to configure how Vulkan functions are imported. There are three ways to do this:
  16539 
  16540 -# **If you link with Vulkan static library** (e.g. "vulkan-1.lib" on Windows):
  16541    - You don't need to do anything.
  16542    - VMA will use these, as macro `VMA_STATIC_VULKAN_FUNCTIONS` is defined to 1 by default.
  16543 -# **If you want VMA to fetch pointers to Vulkan functions dynamically** using `vkGetInstanceProcAddr`,
  16544    `vkGetDeviceProcAddr` (this is the option presented in the example below):
  16545    - Define `VMA_STATIC_VULKAN_FUNCTIONS` to 0, `VMA_DYNAMIC_VULKAN_FUNCTIONS` to 1.
  16546    - Provide pointers to these two functions via VmaVulkanFunctions::vkGetInstanceProcAddr,
  16547      VmaVulkanFunctions::vkGetDeviceProcAddr.
  16548    - The library will fetch pointers to all other functions it needs internally.
  16549 -# **If you fetch pointers to all Vulkan functions in a custom way**, e.g. using some loader like
  16550    [Volk](https://github.com/zeux/volk) (see the sketch after this list):
  16551    - Define `VMA_STATIC_VULKAN_FUNCTIONS` and `VMA_DYNAMIC_VULKAN_FUNCTIONS` to 0.
  16552    - Pass these pointers via the structure #VmaVulkanFunctions.
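
For the third option, a minimal sketch using Volk could look like the following. It assumes `volkInitialize()` and `volkLoadDevice()` have already been called, so Volk's global function pointers are valid:

\code
#define VMA_STATIC_VULKAN_FUNCTIONS 0
#define VMA_DYNAMIC_VULKAN_FUNCTIONS 0
#include "volk.h"          // Declares the global Vulkan function pointers.
#include "vk_mem_alloc.h"

...

VmaVulkanFunctions vulkanFunctions = {};
vulkanFunctions.vkGetPhysicalDeviceProperties = vkGetPhysicalDeviceProperties;
vulkanFunctions.vkGetPhysicalDeviceMemoryProperties = vkGetPhysicalDeviceMemoryProperties;
vulkanFunctions.vkAllocateMemory = vkAllocateMemory;
vulkanFunctions.vkFreeMemory = vkFreeMemory;
vulkanFunctions.vkMapMemory = vkMapMemory;
vulkanFunctions.vkUnmapMemory = vkUnmapMemory;
// ...and so on for every remaining member of VmaVulkanFunctions.
// Then pass the structure via VmaAllocatorCreateInfo::pVulkanFunctions.
\endcode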
  16553 
  16554 \subsection quick_start_initialization_enabling_extensions Enabling extensions
  16555 
  16556 VMA can automatically make use of the following Vulkan extensions.
  16557 If you found them available on the selected physical device and you enabled them
  16558 while creating the `VkInstance` / `VkDevice` objects, inform VMA about their availability
  16559 by setting the appropriate flags in VmaAllocatorCreateInfo::flags.
  16560 
  16561 Vulkan extension              | VMA flag
  16562 ------------------------------|-----------------------------------------------------
  16563 VK_KHR_dedicated_allocation   | #VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT
  16564 VK_KHR_bind_memory2           | #VMA_ALLOCATOR_CREATE_KHR_BIND_MEMORY2_BIT
  16565 VK_KHR_maintenance4           | #VMA_ALLOCATOR_CREATE_KHR_MAINTENANCE4_BIT
  16566 VK_KHR_maintenance5           | #VMA_ALLOCATOR_CREATE_KHR_MAINTENANCE5_BIT
  16567 VK_EXT_memory_budget          | #VMA_ALLOCATOR_CREATE_EXT_MEMORY_BUDGET_BIT
  16568 VK_KHR_buffer_device_address  | #VMA_ALLOCATOR_CREATE_BUFFER_DEVICE_ADDRESS_BIT
  16569 VK_EXT_memory_priority        | #VMA_ALLOCATOR_CREATE_EXT_MEMORY_PRIORITY_BIT
  16570 VK_AMD_device_coherent_memory | #VMA_ALLOCATOR_CREATE_AMD_DEVICE_COHERENT_MEMORY_BIT
  16571 
  16572 Example with fetching pointers to Vulkan functions dynamically:
  16573 
  16574 \code
  16575 #define VMA_STATIC_VULKAN_FUNCTIONS 0
  16576 #define VMA_DYNAMIC_VULKAN_FUNCTIONS 1
  16577 #include "vk_mem_alloc.h"
  16578 
  16579 ...
  16580 
  16581 VmaVulkanFunctions vulkanFunctions = {};
  16582 vulkanFunctions.vkGetInstanceProcAddr = &vkGetInstanceProcAddr;
  16583 vulkanFunctions.vkGetDeviceProcAddr = &vkGetDeviceProcAddr;
  16584 
  16585 VmaAllocatorCreateInfo allocatorCreateInfo = {};
  16586 allocatorCreateInfo.flags = VMA_ALLOCATOR_CREATE_EXT_MEMORY_BUDGET_BIT;
  16587 allocatorCreateInfo.vulkanApiVersion = VK_API_VERSION_1_2;
  16588 allocatorCreateInfo.physicalDevice = physicalDevice;
  16589 allocatorCreateInfo.device = device;
  16590 allocatorCreateInfo.instance = instance;
  16591 allocatorCreateInfo.pVulkanFunctions = &vulkanFunctions;
  16592 
  16593 VmaAllocator allocator;
  16594 vmaCreateAllocator(&allocatorCreateInfo, &allocator);
  16595 
  16596 // Entire program...
  16597 
  16598 // At the end, don't forget to:
  16599 vmaDestroyAllocator(allocator);
  16600 \endcode
  16601 
  16602 
  16603 \subsection quick_start_initialization_other_config Other configuration options
  16604 
  16605 There are additional configuration options available through preprocessor macros that you can define
  16606 before including the VMA header, and through parameters passed in #VmaAllocatorCreateInfo.
  16607 They include the possibility to use your own callbacks for host memory allocations (`VkAllocationCallbacks`),
  16608 callbacks for device memory allocations (used instead of `vkAllocateMemory` / `vkFreeMemory`),
  16609 or your own custom `VMA_ASSERT` macro, among others.
  16610 For more information, see: @ref configuration.
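
For example, a minimal sketch of both options. `MyAssert` and `myVkAllocationCallbacks` are hypothetical, user-provided names, not part of the library:

\code
// Illustrative only: route VMA's assertions to a custom handler.
#define VMA_ASSERT(expr) MyAssert(expr)
#include "vk_mem_alloc.h"

// Pass custom CPU allocation callbacks when creating the allocator.
VmaAllocatorCreateInfo allocatorCreateInfo = {};
allocatorCreateInfo.pAllocationCallbacks = &myVkAllocationCallbacks;
\endcode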
  16611 
  16612 
  16613 \section quick_start_resource_allocation Resource allocation
  16614 
  16615 When you want to create a buffer or image:
  16616 
  16617 -# Fill `VkBufferCreateInfo` / `VkImageCreateInfo` structure.
  16618 -# Fill VmaAllocationCreateInfo structure.
  16619 -# Call vmaCreateBuffer() / vmaCreateImage() to get `VkBuffer`/`VkImage` with memory
  16620    already allocated and bound to it, plus a #VmaAllocation object that represents its underlying memory.
  16621 
  16622 \code
  16623 VkBufferCreateInfo bufferInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
  16624 bufferInfo.size = 65536;
  16625 bufferInfo.usage = VK_BUFFER_USAGE_VERTEX_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;
  16626 
  16627 VmaAllocationCreateInfo allocInfo = {};
  16628 allocInfo.usage = VMA_MEMORY_USAGE_AUTO;
  16629 
  16630 VkBuffer buffer;
  16631 VmaAllocation allocation;
  16632 vmaCreateBuffer(allocator, &bufferInfo, &allocInfo, &buffer, &allocation, nullptr);
  16633 \endcode
  16634 
  16635 Don't forget to destroy your buffer and allocation objects when no longer needed:
  16636 
  16637 \code
  16638 vmaDestroyBuffer(allocator, buffer, allocation);
  16639 \endcode
  16640 
  16641 If you need to map the buffer, you must set the flag
  16642 #VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT or #VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT
  16643 in VmaAllocationCreateInfo::flags.
  16644 There are many additional parameters that can control the choice of memory type to be used for the allocation
  16645 and other features.
  16646 For more information, see documentation chapters: @ref choosing_memory_type, @ref memory_mapping.
  16647 
  16648 
  16649 \page choosing_memory_type Choosing memory type
  16650 
  16651 Physical devices in Vulkan support various combinations of memory heaps and
  16652 types. Help with choosing the correct and optimal memory type for your specific
  16653 resource is one of the key features of this library. You can use it by filling
  16654 appropriate members of the VmaAllocationCreateInfo structure, as described below.
  16655 You can also combine multiple methods.
  16656 
  16657 -# If you just want to find a memory type index that meets your requirements, you
  16658    can use functions: vmaFindMemoryTypeIndexForBufferInfo(),
  16659    vmaFindMemoryTypeIndexForImageInfo(), vmaFindMemoryTypeIndex().
  16660 -# If you want to allocate a region of device memory without association with any
  16661    specific image or buffer, you can use function vmaAllocateMemory(). Usage of
  16662    this function is not recommended and usually not needed.
  16663    vmaAllocateMemoryPages() function is also provided for creating multiple allocations at once,
  16664    which may be useful for sparse binding.
  16665 -# If you already have a buffer or an image created, want to allocate memory
  16666    for it, and will then bind it yourself, you can use functions
  16667    vmaAllocateMemoryForBuffer(), vmaAllocateMemoryForImage().
  16668    For binding, you should use functions vmaBindBufferMemory(), vmaBindImageMemory(),
  16669    or their extended versions vmaBindBufferMemory2(), vmaBindImageMemory2().
  16670 -# If you want to create a buffer or an image, allocate memory for it, and bind
  16671    them together, all in one call, you can use functions vmaCreateBuffer(),
  16672    vmaCreateImage().
  16673    <b>This is the easiest and recommended way to use this library!</b>
  16674 
  16675 When using method 3 or 4, the library internally queries Vulkan for the memory types
  16676 supported by that buffer or image (using functions like `vkGetBufferMemoryRequirements()`)
  16677 and uses only one of these types.
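
For example, a minimal sketch of method 3, assuming `allocator` and an already created `VkBuffer buf` exist:

\code
// Allocate memory suitable for the existing buffer, then bind it.
VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.preferredFlags = VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;

VmaAllocation alloc;
VkResult res = vmaAllocateMemoryForBuffer(allocator, buf, &allocCreateInfo, &alloc, nullptr);
// Check res...
res = vmaBindBufferMemory(allocator, alloc, buf);
// Check res...
\endcode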
  16678 
  16679 If no memory type can be found that meets all the requirements, these functions
  16680 return `VK_ERROR_FEATURE_NOT_PRESENT`.
  16681 
  16682 You can leave the VmaAllocationCreateInfo structure completely filled with zeros.
  16683 It means no requirements are specified for the memory type.
  16684 This is valid, although not very useful.
  16685 
  16686 \section choosing_memory_type_usage Usage
  16687 
  16688 The easiest way to specify memory requirements is to fill member
  16689 VmaAllocationCreateInfo::usage using one of the values of enum #VmaMemoryUsage.
  16690 It defines high level, common usage types.
  16691 Since version 3 of the library, it is recommended to use #VMA_MEMORY_USAGE_AUTO to let the library select the best memory type for your resource automatically.
  16692 
  16693 For example, if you want to create a uniform buffer that will be filled using
  16694 transfer only once or infrequently and then used for rendering every frame, you can
  16695 do it using the following code. The buffer will most likely end up in a memory type with
  16696 `VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT` that is fast to access by the GPU device.
  16697 
  16698 \code
  16699 VkBufferCreateInfo bufferInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
  16700 bufferInfo.size = 65536;
  16701 bufferInfo.usage = VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;
  16702 
  16703 VmaAllocationCreateInfo allocInfo = {};
  16704 allocInfo.usage = VMA_MEMORY_USAGE_AUTO;
  16705 
  16706 VkBuffer buffer;
  16707 VmaAllocation allocation;
  16708 vmaCreateBuffer(allocator, &bufferInfo, &allocInfo, &buffer, &allocation, nullptr);
  16709 \endcode
  16710 
  16711 If you have a preference for putting the resource in GPU (device) memory or CPU (host) memory
  16712 on systems with a discrete graphics card where these memories are separate, you can use
  16713 #VMA_MEMORY_USAGE_AUTO_PREFER_DEVICE or #VMA_MEMORY_USAGE_AUTO_PREFER_HOST.
  16714 
  16715 When using `VMA_MEMORY_USAGE_AUTO*` values, if you want to map the allocated memory,
  16716 you also need to specify one of the host access flags:
  16717 #VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT or #VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT.
  16718 This helps the library decide on the preferred memory type and ensures it has `VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT`,
  16719 so you can map it.
  16720 
  16721 For example, a staging buffer that will be filled via a mapped pointer and then
  16722 used as the source of a transfer to the buffer described previously can be created like this.
  16723 It will likely end up in a memory type that is `HOST_VISIBLE` and `HOST_COHERENT`
  16724 but not `HOST_CACHED` (meaning uncached, write-combined) and not `DEVICE_LOCAL` (meaning system RAM).
  16725 
  16726 \code
  16727 VkBufferCreateInfo stagingBufferInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
  16728 stagingBufferInfo.size = 65536;
  16729 stagingBufferInfo.usage = VK_BUFFER_USAGE_TRANSFER_SRC_BIT;
  16730 
  16731 VmaAllocationCreateInfo stagingAllocInfo = {};
  16732 stagingAllocInfo.usage = VMA_MEMORY_USAGE_AUTO;
  16733 stagingAllocInfo.flags = VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT;
  16734 
  16735 VkBuffer stagingBuffer;
  16736 VmaAllocation stagingAllocation;
  16737 vmaCreateBuffer(allocator, &stagingBufferInfo, &stagingAllocInfo, &stagingBuffer, &stagingAllocation, nullptr);
  16738 \endcode
  16739 
  16740 For more examples of creating different kinds of resources, see chapter \ref usage_patterns.
  16741 See also: @ref memory_mapping.
  16742 
  16743 Usage values `VMA_MEMORY_USAGE_AUTO*` are legal to use only when the library knows
  16744 about the resource being created by having `VkBufferCreateInfo` / `VkImageCreateInfo` passed,
  16745 so they work with functions like: vmaCreateBuffer(), vmaCreateImage(), vmaFindMemoryTypeIndexForBufferInfo() etc.
  16746 If you allocate raw memory using function vmaAllocateMemory(), you have to use other means of selecting
  16747 memory type, as described below.
  16748 
  16749 \note
  16750 Old usage values (`VMA_MEMORY_USAGE_GPU_ONLY`, `VMA_MEMORY_USAGE_CPU_ONLY`,
  16751 `VMA_MEMORY_USAGE_CPU_TO_GPU`, `VMA_MEMORY_USAGE_GPU_TO_CPU`, `VMA_MEMORY_USAGE_CPU_COPY`)
  16752 are still available and work the same way as in previous versions of the library
  16753 for backward compatibility, but they are deprecated.
  16754 
  16755 \section choosing_memory_type_required_preferred_flags Required and preferred flags
  16756 
  16757 You can specify more detailed requirements by filling members
  16758 VmaAllocationCreateInfo::requiredFlags and VmaAllocationCreateInfo::preferredFlags
  16759 with a combination of bits from enum `VkMemoryPropertyFlags`. For example,
  16760 if you want to create a buffer that will be persistently mapped on host (so it
  16761 must be `HOST_VISIBLE`) and preferably will also be `HOST_COHERENT` and `HOST_CACHED`,
  16762 use the following code:
  16763 
  16764 \code
  16765 VmaAllocationCreateInfo allocInfo = {};
  16766 allocInfo.requiredFlags = VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
  16767 allocInfo.preferredFlags = VK_MEMORY_PROPERTY_HOST_COHERENT_BIT | VK_MEMORY_PROPERTY_HOST_CACHED_BIT;
  16768 allocInfo.flags = VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT | VMA_ALLOCATION_CREATE_MAPPED_BIT;
  16769 
  16770 VkBuffer buffer;
  16771 VmaAllocation allocation;
  16772 vmaCreateBuffer(allocator, &bufferInfo, &allocInfo, &buffer, &allocation, nullptr);
  16773 \endcode
  16774 
  16775 A memory type is chosen that has all the required flags and as many preferred
  16776 flags set as possible.
  16777 
  16778 Value passed in VmaAllocationCreateInfo::usage is internally converted to a set of required and preferred flags,
  16779 plus some extra "magic" (heuristics).
  16780 
  16781 \section choosing_memory_type_explicit_memory_types Explicit memory types
  16782 
  16783 If you inspected memory types available on the physical device and <b>you have
  16784 a preference for memory types that you want to use</b>, you can fill member
  16785 VmaAllocationCreateInfo::memoryTypeBits. It is a bit mask, where each bit set
  16786 means that a memory type with that index is allowed to be used for the
  16787 allocation. Special value 0, just like `UINT32_MAX`, means there are no
  16788 restrictions to memory type index.
  16789 
  16790 Please note that this member is NOT just a memory type index; it is a bit mask.
  16791 Still, you can use it to choose a single, specific memory type.
  16792 For example, if you have already determined that your buffer should be created in
  16793 memory type 2, use the following code:
  16794 
  16795 \code
  16796 uint32_t memoryTypeIndex = 2;
  16797 
  16798 VmaAllocationCreateInfo allocInfo = {};
  16799 allocInfo.memoryTypeBits = 1u << memoryTypeIndex;
  16800 
  16801 VkBuffer buffer;
  16802 VmaAllocation allocation;
  16803 vmaCreateBuffer(allocator, &bufferInfo, &allocInfo, &buffer, &allocation, nullptr);
  16804 \endcode
  16805 
  16806 You can also use this parameter to <b>exclude some memory types</b>.
  16807 If you inspect the memory heaps and types available on the current physical device and
  16808 you determine that for some reason you don't want to use a specific memory type for the allocation,
  16809 you can enable automatic memory type selection but exclude a certain memory type or types
  16810 by setting all bits of `memoryTypeBits` to 1 except the ones corresponding to the excluded types.
  16811 
  16812 \code
  16813 // ...
  16814 uint32_t excludedMemoryTypeIndex = 2;
  16815 VmaAllocationCreateInfo allocInfo = {};
  16816 allocInfo.usage = VMA_MEMORY_USAGE_AUTO;
  16817 allocInfo.memoryTypeBits = ~(1u << excludedMemoryTypeIndex);
  16818 // ...
  16819 \endcode
  16820 
  16821 
  16822 \section choosing_memory_type_custom_memory_pools Custom memory pools
  16823 
  16824 If you allocate from a custom memory pool, none of the ways of specifying memory
  16825 requirements described above are applicable, and the aforementioned members
  16826 of the VmaAllocationCreateInfo structure are ignored. The memory type is selected
  16827 explicitly when creating the pool and is then used for all the allocations made from
  16828 that pool. For further details, see \ref custom_memory_pools.
  16829 
  16830 \section choosing_memory_type_dedicated_allocations Dedicated allocations
  16831 
  16832 Memory for allocations is reserved out of a larger block of `VkDeviceMemory`
  16833 allocated from Vulkan internally. That is the main feature of this whole library.
  16834 You can still request a separate memory block to be created for an allocation,
  16835 just like you would do in a trivial solution without using any allocator.
  16836 In that case, a buffer or image is always bound to that memory at offset 0.
  16837 This is called a "dedicated allocation".
  16838 You can explicitly request it by using the flag #VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT.
  16839 The library can also internally decide to use a dedicated allocation in some cases, e.g.:
  16840 
  16841 - When the size of the allocation is large.
  16842 - When [VK_KHR_dedicated_allocation](@ref vk_khr_dedicated_allocation) extension is enabled
  16843   and it reports that dedicated allocation is required or recommended for the resource.
  16844 - When allocation of the next big memory block fails due to insufficient device memory,
  16845   but allocation of the exact requested size succeeds.
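
For example, a dedicated allocation can be requested explicitly like this (a minimal sketch):

\code
VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
// Force a separate VkDeviceMemory block for this resource.
allocCreateInfo.flags = VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT;
\endcode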
  16846 
  16847 
  16848 \page memory_mapping Memory mapping
  16849 
  16850 To "map memory" in Vulkan means to obtain a CPU pointer to `VkDeviceMemory`,
  16851 to be able to read from it or write to it in CPU code.
  16852 Mapping is possible only of memory allocated from a memory type that has
  16853 `VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT` flag.
  16854 Functions `vkMapMemory()`, `vkUnmapMemory()` are designed for this purpose.
  16855 You can use them directly with memory allocated by this library,
  16856 but it is not recommended because of following issue:
  16857 Mapping the same `VkDeviceMemory` block multiple times is illegal - only one mapping at a time is allowed.
  16858 This includes mapping disjoint regions. Mapping is not reference-counted internally by Vulkan.
  16859 It is also not thread-safe.
  16860 Because of this, Vulkan Memory Allocator provides following facilities:
  16861 
  16862 \note If you want to be able to map an allocation, you need to specify one of the flags
  16863 #VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT or #VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT
  16864 in VmaAllocationCreateInfo::flags. These flags are required for an allocation to be mappable
  16865 when using #VMA_MEMORY_USAGE_AUTO or other `VMA_MEMORY_USAGE_AUTO*` enum values.
  16866 For other usage values they are ignored and every such allocation made in `HOST_VISIBLE` memory type is mappable,
  16867 but these flags can still be used for consistency.
  16868 
  16869 \section memory_mapping_copy_functions Copy functions
  16870 
  16871 The easiest way to copy data from a host pointer to an allocation is to use the convenience function vmaCopyMemoryToAllocation().
  16872 It automatically maps the Vulkan memory temporarily (if not already mapped), performs `memcpy`,
  16873 and calls `vkFlushMappedMemoryRanges` (if required, i.e. if the memory type is not `HOST_COHERENT`).
  16874 
  16875 It is also the safest method, because using `memcpy` avoids the risk of accidentally introducing memory reads
  16876 (e.g. by doing `pMappedVectors[i] += v`), which may be very slow on memory types that are not `HOST_CACHED`.
  16877 
  16878 \code
  16879 struct ConstantBuffer
  16880 {
  16881     ...
  16882 };
  16883 ConstantBuffer constantBufferData = ...
  16884 
  16885 VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
  16886 bufCreateInfo.size = sizeof(ConstantBuffer);
  16887 bufCreateInfo.usage = VK_BUFFER_USAGE_TRANSFER_SRC_BIT;
  16888 
  16889 VmaAllocationCreateInfo allocCreateInfo = {};
  16890 allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
  16891 allocCreateInfo.flags = VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT;
  16892 
  16893 VkBuffer buf;
  16894 VmaAllocation alloc;
  16895 vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, nullptr);
  16896 
  16897 vmaCopyMemoryToAllocation(allocator, &constantBufferData, alloc, 0, sizeof(ConstantBuffer));
  16898 \endcode
  16899 
  16900 A copy in the other direction - from an allocation to a host pointer - can be performed the same way using function vmaCopyAllocationToMemory().
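
For example, a minimal sketch of such a readback. It assumes a hypothetical `readbackAlloc` that was created with #VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT, which permits reading, unlike the sequential-write allocation above:

\code
// Assumes `readbackAlloc` allows host reads (HOST_ACCESS_RANDOM flag).
ConstantBuffer readbackData;
vmaCopyAllocationToMemory(allocator, readbackAlloc, 0, &readbackData, sizeof(ConstantBuffer));
\endcode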
  16901 
  16902 \section memory_mapping_mapping_functions Mapping functions
  16903 
  16904 The library provides the following functions for mapping a specific allocation: vmaMapMemory(), vmaUnmapMemory().
  16905 They are safer and more convenient to use than the standard Vulkan functions.
  16906 You can map an allocation multiple times simultaneously - mapping is reference-counted internally.
  16907 You can also map different allocations simultaneously, regardless of whether they come from the same `VkDeviceMemory` block.
  16908 The way it is implemented is that the library always maps the entire memory block, not just the region of the allocation.
  16909 For further details, see description of vmaMapMemory() function.
  16910 Example:
  16911 
  16912 \code
  16913 // Having these objects initialized:
  16914 struct ConstantBuffer
  16915 {
  16916     ...
  16917 };
  16918 ConstantBuffer constantBufferData = ...
  16919 
  16920 VmaAllocator allocator = ...
  16921 VkBuffer constantBuffer = ...
  16922 VmaAllocation constantBufferAllocation = ...
  16923 
  16924 // You can map and fill your buffer using following code:
  16925 
  16926 void* mappedData;
  16927 vmaMapMemory(allocator, constantBufferAllocation, &mappedData);
  16928 memcpy(mappedData, &constantBufferData, sizeof(constantBufferData));
  16929 vmaUnmapMemory(allocator, constantBufferAllocation);
  16930 \endcode
  16931 
  16932 When mapping, you may see a warning from the Vulkan validation layer similar to this one:
  16933 
  16934 <i>Mapping an image with layout VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL can result in undefined behavior if this memory is used by the device. Only GENERAL or PREINITIALIZED should be used.</i>
  16935 
  16936 It happens because the library maps the entire `VkDeviceMemory` block, where different
  16937 types of images and buffers may end up together, especially on GPUs with unified memory like Intel's.
  16938 You can safely ignore it if you are sure you access only the memory of the intended
  16939 object that you wanted to map.
  16940 
  16941 
  16942 \section memory_mapping_persistently_mapped_memory Persistently mapped memory
  16943 
  16944 Keeping your memory persistently mapped is generally OK in Vulkan.
  16945 You don't need to unmap it before using its data on the GPU.
  16946 The library provides a special feature designed for that:
  16947 Allocations made with the #VMA_ALLOCATION_CREATE_MAPPED_BIT flag set in
  16948 VmaAllocationCreateInfo::flags stay mapped all the time,
  16949 so you can just access the CPU pointer to their data at any time
  16950 without needing to call any "map" or "unmap" function.
  16951 Example:
  16952 
  16953 \code
  16954 VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
  16955 bufCreateInfo.size = sizeof(ConstantBuffer);
  16956 bufCreateInfo.usage = VK_BUFFER_USAGE_TRANSFER_SRC_BIT;
  16957 
  16958 VmaAllocationCreateInfo allocCreateInfo = {};
  16959 allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
  16960 allocCreateInfo.flags = VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT |
  16961     VMA_ALLOCATION_CREATE_MAPPED_BIT;
  16962 
  16963 VkBuffer buf;
  16964 VmaAllocation alloc;
  16965 VmaAllocationInfo allocInfo;
  16966 vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, &allocInfo);
  16967 
  16968 // Buffer is already mapped. You can access its memory.
  16969 memcpy(allocInfo.pMappedData, &constantBufferData, sizeof(constantBufferData));
  16970 \endcode
  16971 
  16972 \note #VMA_ALLOCATION_CREATE_MAPPED_BIT by itself doesn't guarantee that the allocation will end up
  16973 in a mappable memory type.
  16974 For this, you need to also specify #VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT or
  16975 #VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT.
  16976 #VMA_ALLOCATION_CREATE_MAPPED_BIT only guarantees that if the memory is `HOST_VISIBLE`, the allocation will be mapped on creation.
  16977 For an example of how to make use of this fact, see section \ref usage_patterns_advanced_data_uploading.
  16978 
  16979 \section memory_mapping_cache_control Cache flush and invalidate
  16980 
  16981 Memory in Vulkan doesn't need to be unmapped before using it on the GPU,
  16982 but unless a memory type has the `VK_MEMORY_PROPERTY_HOST_COHERENT_BIT` flag set,
  16983 you need to manually **invalidate** the cache before reading from a mapped pointer
  16984 and **flush** the cache after writing to a mapped pointer.
  16985 Map/unmap operations don't do that automatically.
  16986 Vulkan provides the following functions for this purpose: `vkFlushMappedMemoryRanges()`,
  16987 `vkInvalidateMappedMemoryRanges()`, but this library provides more convenient
  16988 functions that operate on a given allocation object: vmaFlushAllocation(),
  16989 vmaInvalidateAllocation(),
  16990 or on multiple objects at once: vmaFlushAllocations(), vmaInvalidateAllocations().
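
For example, a minimal sketch of flushing after a write through a persistently mapped pointer, reusing `alloc` and `allocInfo` from the previous example (`VK_WHOLE_SIZE` covers the entire allocation):

\code
memcpy(allocInfo.pMappedData, &constantBufferData, sizeof(constantBufferData));
// No-op on HOST_COHERENT memory; required otherwise.
vmaFlushAllocation(allocator, alloc, 0, VK_WHOLE_SIZE);
\endcode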
  16991 
  16992 Regions of memory specified for flush/invalidate must be aligned to
  16993 `VkPhysicalDeviceLimits::nonCoherentAtomSize`. This is automatically ensured by the library.
  16994 In any memory type that is `HOST_VISIBLE` but not `HOST_COHERENT`, all allocations
  16995 within blocks are aligned to this value, so their offsets are always multiples of
  16996 `nonCoherentAtomSize` and two different allocations never share the same "line" of this size.
  16997 
  16998 Also, Windows drivers from all 3 PC GPU vendors (AMD, Intel, NVIDIA)
  16999 currently provide the `HOST_COHERENT` flag on all memory types that are
  17000 `HOST_VISIBLE`, so on PC you may not need to bother.
  17001 
  17002 
  17003 \page staying_within_budget Staying within budget
  17004 
  17005 When developing a graphics-intensive game or program, it is important to avoid allocating
  17006 more GPU memory than is physically available. When the memory is over-committed,
  17007 various bad things can happen, depending on the specific GPU, graphics driver, and
  17008 operating system:
  17009 
  17010 - It may just work without any problems.
  17011 - The application may slow down because some memory blocks are moved to system RAM
  17012   and the GPU has to access them through the PCI Express bus.
  17013 - A new allocation may take a very long time to complete, even a few seconds, and possibly
  17014   freeze the entire system.
  17015 - The new allocation may fail with `VK_ERROR_OUT_OF_DEVICE_MEMORY`.
  17016 - It may even result in a GPU crash (TDR), observed as `VK_ERROR_DEVICE_LOST`
  17017   returned somewhere later.
  17018 
  17019 \section staying_within_budget_querying_for_budget Querying for budget
  17020 
  17021 To query for current memory usage and available budget, use function vmaGetHeapBudgets().
  17022 Returned structure #VmaBudget contains quantities expressed in bytes, per Vulkan memory heap.
  17023 
  17024 Please note that this function returns different information and works faster than
  17025 vmaCalculateStatistics(). vmaGetHeapBudgets() can be called every frame or even before every
  17026 allocation, while vmaCalculateStatistics() is intended to be used rarely,
  17027 only to obtain statistical information, e.g. for debugging purposes.
  17028 
  17029 It is recommended to use the <b>VK_EXT_memory_budget</b> device extension to obtain information
  17030 about the budget from the Vulkan device. VMA is able to use this extension automatically.
  17031 When not enabled, the allocator behaves the same way, but it then estimates current usage
  17032 and available budget based on its internal information and Vulkan memory heap sizes,
  17033 which may be less precise. In order to use this extension:
  17034 
  17035 1. Make sure the extensions VK_EXT_memory_budget and VK_KHR_get_physical_device_properties2
  17036    required by it are available, and enable them. Please note that the first is a device
  17037    extension and the second is an instance extension!
  17038 2. Use flag #VMA_ALLOCATOR_CREATE_EXT_MEMORY_BUDGET_BIT when creating #VmaAllocator object.
  17039 3. Make sure to call vmaSetCurrentFrameIndex() every frame. Budget is queried from
  17040    Vulkan inside of it to avoid overhead of querying it with every allocation.
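
For example, a minimal sketch of the per-frame part (step 3), assuming `frameIndex` is the application's own frame counter:

\code
vmaSetCurrentFrameIndex(allocator, frameIndex);

VmaBudget budgets[VK_MAX_MEMORY_HEAPS];
vmaGetHeapBudgets(allocator, budgets);
// budgets[heapIndex].usage and budgets[heapIndex].budget are expressed in bytes.
\endcode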
  17041 
  17042 \section staying_within_budget_controlling_memory_usage Controlling memory usage
  17043 
  17044 There are many ways in which you can try to stay within the budget.
  17045 
  17046 First, when making a new allocation requires allocating a new memory block, the library
  17047 automatically tries not to exceed the budget. If a block with the default recommended size
  17048 (e.g. 256 MB) would go over budget, a smaller block is allocated, possibly even
  17049 dedicated memory for just this resource.
  17050 
  17051 If the size of the requested resource plus current memory usage is more than the
  17052 budget, by default the library still tries to create it, leaving it to the Vulkan
  17053 implementation whether the allocation succeeds or fails. You can change this behavior
  17054 by using the #VMA_ALLOCATION_CREATE_WITHIN_BUDGET_BIT flag. With it, the allocation is
  17055 not made if it would exceed the budget or if the budget is already exceeded.
  17056 VMA then tries to make the allocation from the next eligible Vulkan memory type.
  17057 If all of them fail, the call then fails with `VK_ERROR_OUT_OF_DEVICE_MEMORY`.
  17058 An example usage pattern may be to pass the #VMA_ALLOCATION_CREATE_WITHIN_BUDGET_BIT flag
  17059 when creating resources that are not essential for the application (e.g. the texture
  17060 of a specific object) and not to pass it when creating critically important resources
  17061 (e.g. render targets).
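
A minimal sketch of that pattern:

\code
// For a non-essential resource: rather fail than exceed the budget.
VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
allocCreateInfo.flags = VMA_ALLOCATION_CREATE_WITHIN_BUDGET_BIT;
\endcode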
  17062 
  17063 On AMD graphics cards there is a custom vendor extension available: <b>VK_AMD_memory_overallocation_behavior</b>
  17064 that allows controlling the behavior of the Vulkan implementation in out-of-memory cases -
  17065 whether it should fail with an error code or still allow the allocation.
  17066 Usage of this extension involves only passing an extra structure on Vulkan device creation,
  17067 so it is out of the scope of this library.
  17068 
  17069 Finally, you can also use the #VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT flag to make sure
  17070 a new allocation is created only when it fits inside one of the existing memory blocks.
  17071 If it would require allocating a new block, the call fails instead with `VK_ERROR_OUT_OF_DEVICE_MEMORY`.
  17072 This also ensures that the function call is very fast because it never goes to Vulkan
  17073 to obtain a new block.
  17074 
  17075 \note Creating \ref custom_memory_pools with VmaPoolCreateInfo::minBlockCount
  17076 set to more than 0 will currently try to allocate memory blocks without checking whether they
  17077 fit within budget.
  17078 
  17079 
  17080 \page resource_aliasing Resource aliasing (overlap)
  17081 
  17082 New explicit graphics APIs (Vulkan and Direct3D 12), thanks to manual memory
  17083 management, give an opportunity to alias (overlap) multiple resources in the
  17084 same region of memory - a feature not available in the old APIs (Direct3D 11, OpenGL).
  17085 It can be useful to save video memory, but it must be used with caution.
  17086 
  17087 For example, if you know the flow of your whole render frame in advance, you
  17088 are going to use some intermediate textures or buffers only during a small range of render passes,
  17089 and you know these ranges don't overlap in time, you can bind these resources to
  17090 the same place in memory, even if they have completely different parameters (width, height, format etc.).
  17091 
  17092 ![Resource aliasing (overlap)](../gfx/Aliasing.png)
  17093 
  17094 Such a scenario is possible using VMA, but you need to create your images manually.
  17095 Then you need to calculate the parameters of the allocation to be made using this formula:
  17096 
  17097 - allocation size = max(size of each image)
  17098 - allocation alignment = max(alignment of each image)
  17099 - allocation memoryTypeBits = bitwise AND(memoryTypeBits of each image)
  17100 
  17101 The following example shows two different images bound to the same place in memory,
  17102 allocated to fit the largest of them.
  17103 
  17104 \code
  17105 // A 512x512 texture to be sampled.
  17106 VkImageCreateInfo img1CreateInfo = { VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO };
  17107 img1CreateInfo.imageType = VK_IMAGE_TYPE_2D;
  17108 img1CreateInfo.extent.width = 512;
  17109 img1CreateInfo.extent.height = 512;
  17110 img1CreateInfo.extent.depth = 1;
  17111 img1CreateInfo.mipLevels = 10;
  17112 img1CreateInfo.arrayLayers = 1;
  17113 img1CreateInfo.format = VK_FORMAT_R8G8B8A8_SRGB;
  17114 img1CreateInfo.tiling = VK_IMAGE_TILING_OPTIMAL;
  17115 img1CreateInfo.initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;
  17116 img1CreateInfo.usage = VK_IMAGE_USAGE_TRANSFER_DST_BIT | VK_IMAGE_USAGE_SAMPLED_BIT;
  17117 img1CreateInfo.samples = VK_SAMPLE_COUNT_1_BIT;
  17118 
  17119 // A full screen texture to be used as color attachment.
  17120 VkImageCreateInfo img2CreateInfo = { VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO };
  17121 img2CreateInfo.imageType = VK_IMAGE_TYPE_2D;
  17122 img2CreateInfo.extent.width = 1920;
  17123 img2CreateInfo.extent.height = 1080;
  17124 img2CreateInfo.extent.depth = 1;
  17125 img2CreateInfo.mipLevels = 1;
  17126 img2CreateInfo.arrayLayers = 1;
  17127 img2CreateInfo.format = VK_FORMAT_R8G8B8A8_UNORM;
  17128 img2CreateInfo.tiling = VK_IMAGE_TILING_OPTIMAL;
  17129 img2CreateInfo.initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;
  17130 img2CreateInfo.usage = VK_IMAGE_USAGE_SAMPLED_BIT | VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT;
  17131 img2CreateInfo.samples = VK_SAMPLE_COUNT_1_BIT;
  17132 
  17133 VkImage img1;
  17134 res = vkCreateImage(device, &img1CreateInfo, nullptr, &img1);
  17135 VkImage img2;
  17136 res = vkCreateImage(device, &img2CreateInfo, nullptr, &img2);
  17137 
  17138 VkMemoryRequirements img1MemReq;
  17139 vkGetImageMemoryRequirements(device, img1, &img1MemReq);
  17140 VkMemoryRequirements img2MemReq;
  17141 vkGetImageMemoryRequirements(device, img2, &img2MemReq);
  17142 
  17143 VkMemoryRequirements finalMemReq = {};
  17144 finalMemReq.size = std::max(img1MemReq.size, img2MemReq.size);
  17145 finalMemReq.alignment = std::max(img1MemReq.alignment, img2MemReq.alignment);
  17146 finalMemReq.memoryTypeBits = img1MemReq.memoryTypeBits & img2MemReq.memoryTypeBits;
  17147 // Validate if(finalMemReq.memoryTypeBits != 0)
  17148 
  17149 VmaAllocationCreateInfo allocCreateInfo = {};
  17150 allocCreateInfo.preferredFlags = VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT;
  17151 
  17152 VmaAllocation alloc;
  17153 res = vmaAllocateMemory(allocator, &finalMemReq, &allocCreateInfo, &alloc, nullptr);
  17154 
  17155 res = vmaBindImageMemory(allocator, alloc, img1);
  17156 res = vmaBindImageMemory(allocator, alloc, img2);
  17157 
  17158 // You can use img1, img2 here, but not at the same time!
  17159 
  17160 vmaFreeMemory(allocator, alloc);
  17161 vkDestroyImage(device, img2, nullptr);
  17162 vkDestroyImage(device, img1, nullptr);
  17163 \endcode
  17164 
  17165 VMA also provides convenience functions that create a buffer or image and bind it to memory
  17166 represented by an existing #VmaAllocation:
  17167 vmaCreateAliasingBuffer(), vmaCreateAliasingBuffer2(),
  17168 vmaCreateAliasingImage(), vmaCreateAliasingImage2().
  17169 Versions with "2" offer additional parameter `allocationLocalOffset`.
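
For example, a minimal sketch that creates `img2` bound to the existing allocation in one call, instead of separate `vkCreateImage()` + vmaBindImageMemory():

\code
VkImage img2;
res = vmaCreateAliasingImage(allocator, alloc, &img2CreateInfo, &img2);
// Check res...
// Destroy later with vkDestroyImage(device, img2, nullptr); the allocation stays alive.
\endcode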
  17170 
  17171 Remember that using resources that alias in memory requires proper synchronization.
  17172 You need to issue a memory barrier to make sure commands that use `img1` and `img2`
  17173 don't overlap on the GPU timeline.
  17174 You also need to treat a resource after aliasing as uninitialized - containing garbage data.
  17175 For example, if you use `img1` and then want to use `img2`, you need to issue
  17176 an image memory barrier for `img2` with `oldLayout` = `VK_IMAGE_LAYOUT_UNDEFINED`.
  17177 
  17178 Additional considerations:
  17179 
  17180 - Vulkan also allows interpreting the contents of memory between aliasing resources consistently in some cases.
  17181 See chapter 11.8 "Memory Aliasing" of the Vulkan specification or the `VK_IMAGE_CREATE_ALIAS_BIT` flag.
  17182 - You can create a more complex layout where different images and buffers are bound
  17183 at different offsets inside one large allocation. For example, one can imagine
  17184 a big texture used in some render passes, aliasing with a set of many small buffers
  17185 used in some further passes. To bind a resource at a non-zero offset in an allocation,
  17186 use vmaBindBufferMemory2() / vmaBindImageMemory2().
  17187 - Before allocating memory for the resources you want to alias, check `memoryTypeBits`
  17188 returned in memory requirements of each resource to make sure the bits overlap.
  17189 Some GPUs may expose multiple memory types suitable e.g. only for buffers or
  17190 images with `COLOR_ATTACHMENT` usage, so the sets of memory types supported by your
  17191 resources may be disjoint. Aliasing them is not possible in that case.
  17192 
  17193 
  17194 \page custom_memory_pools Custom memory pools
  17195 
  17196 A memory pool contains a number of `VkDeviceMemory` blocks.
  17197 The library automatically creates and manages a default pool for each memory type available on the device.
  17198 A default memory pool automatically grows in size.
  17199 The size of allocated blocks is also variable and managed automatically.
  17200 You are using the default pools whenever you leave VmaAllocationCreateInfo::pool = null.
  17201 
  17202 You can create a custom pool and allocate memory out of it.
  17203 It can be useful if you want to:
  17204 
  17205 - Keep certain kinds of allocations separate from others.
  17206 - Enforce a particular, fixed size of Vulkan memory blocks.
  17207 - Limit the maximum amount of Vulkan memory allocated for that pool.
  17208 - Reserve a minimum or fixed amount of Vulkan memory always preallocated for that pool.
  17209 - Use extra parameters for a set of your allocations that are available in #VmaPoolCreateInfo but not in
  17210   #VmaAllocationCreateInfo - e.g., custom minimum alignment, custom `pNext` chain.
  17211 - Perform defragmentation on a specific subset of your allocations.
  17212 
  17213 To use custom memory pools:
  17214 
  17215 -# Fill VmaPoolCreateInfo structure.
  17216 -# Call vmaCreatePool() to obtain #VmaPool handle.
  17217 -# When making an allocation, set VmaAllocationCreateInfo::pool to this handle.
  17218    You don't need to specify any other parameters of this structure, like `usage`.
  17219 
  17220 Example:
  17221 
  17222 \code
  17223 // Find memoryTypeIndex for the pool.
  17224 VkBufferCreateInfo sampleBufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
  17225 sampleBufCreateInfo.size = 0x10000; // Doesn't matter.
  17226 sampleBufCreateInfo.usage = VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;
  17227 
  17228 VmaAllocationCreateInfo sampleAllocCreateInfo = {};
  17229 sampleAllocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
  17230 
  17231 uint32_t memTypeIndex;
  17232 VkResult res = vmaFindMemoryTypeIndexForBufferInfo(allocator,
  17233     &sampleBufCreateInfo, &sampleAllocCreateInfo, &memTypeIndex);
  17234 // Check res...
  17235 
  17236 // Create a pool that can have at most 2 blocks, 128 MiB each.
  17237 VmaPoolCreateInfo poolCreateInfo = {};
  17238 poolCreateInfo.memoryTypeIndex = memTypeIndex;
  17239 poolCreateInfo.blockSize = 128ull * 1024 * 1024;
  17240 poolCreateInfo.maxBlockCount = 2;
  17241 
  17242 VmaPool pool;
  17243 res = vmaCreatePool(allocator, &poolCreateInfo, &pool);
  17244 // Check res...
  17245 
  17246 // Allocate a buffer out of it.
  17247 VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
  17248 bufCreateInfo.size = 1024;
  17249 bufCreateInfo.usage = VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;
  17250 
  17251 VmaAllocationCreateInfo allocCreateInfo = {};
  17252 allocCreateInfo.pool = pool;
  17253 
  17254 VkBuffer buf;
  17255 VmaAllocation alloc;
  17256 res = vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, nullptr);
  17257 // Check res...
  17258 \endcode
  17259 
  17260 You have to free all allocations made from this pool before destroying it.
  17261 
  17262 \code
  17263 vmaDestroyBuffer(allocator, buf, alloc);
  17264 vmaDestroyPool(allocator, pool);
  17265 \endcode
  17266 
  17267 New versions of this library support creating dedicated allocations in custom pools.
  17268 It is supported only when VmaPoolCreateInfo::blockSize = 0.
  17269 To use this feature, set VmaAllocationCreateInfo::pool to your custom pool and
  17270 VmaAllocationCreateInfo::flags to #VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT.
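
A minimal sketch, assuming `pool` was created with VmaPoolCreateInfo::blockSize = 0:

\code
VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.pool = pool;
allocCreateInfo.flags = VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT;
\endcode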
  17271 
  17272 
  17273 \section custom_memory_pools_MemTypeIndex Choosing memory type index
  17274 
  17275 When creating a pool, you must explicitly specify the memory type index.
  17276 To find the one suitable for your buffers or images, you can use helper functions
  17277 vmaFindMemoryTypeIndexForBufferInfo(), vmaFindMemoryTypeIndexForImageInfo().
  17278 You need to provide structures with example parameters of buffers or images
  17279 that you are going to create in that pool.
  17280 
  17281 \code
  17282 VkBufferCreateInfo exampleBufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
  17283 exampleBufCreateInfo.size = 1024; // Doesn't matter
  17284 exampleBufCreateInfo.usage = VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;
  17285 
  17286 VmaAllocationCreateInfo allocCreateInfo = {};
  17287 allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
  17288 
  17289 uint32_t memTypeIndex;
  17290 vmaFindMemoryTypeIndexForBufferInfo(allocator, &exampleBufCreateInfo, &allocCreateInfo, &memTypeIndex);
  17291 
  17292 VmaPoolCreateInfo poolCreateInfo = {};
  17293 poolCreateInfo.memoryTypeIndex = memTypeIndex;
  17294 // ...
  17295 \endcode
  17296 
  17297 When creating buffers/images allocated in that pool, provide the following parameters:
  17298 
  17299 - `VkBufferCreateInfo`: Prefer to pass the same parameters as above.
  17300   Otherwise you risk creating resources in a memory type that is not suitable for them, which may result in undefined behavior.
  17301   Using different `VK_BUFFER_USAGE_` flags may work, but you shouldn't create images in a pool intended for buffers
  17302   or the other way around.
  17303 - VmaAllocationCreateInfo: You don't need to pass the same parameters. Fill only the `pool` member.
  17304   Other members are ignored anyway.
  17305 
  17306 
  17307 \section custom_memory_pools_when_not_use When not to use custom pools
  17308 
  17309 Custom pools are commonly overused by VMA users.
  17310 While it may feel natural to keep some logical groups of resources separate in memory,
  17311 in most cases it does more harm than good.
  17312 Using a custom pool shouldn't be your first choice.
  17313 Instead, make all allocations from the default pools first and only use custom pools
  17314 if you can prove and measure that doing so is beneficial in some way,
  17315 e.g. it results in lower memory usage, better performance, etc.
  17316 
  17317 Using custom pools has disadvantages:
  17318 
  17319 - Each pool has its own collection of `VkDeviceMemory` blocks.
  17320   Some of them may be partially or even completely empty.
  17321   Spreading allocations across multiple pools increases the amount of wasted (allocated but unbound) memory.
  17322 - You must manually choose a specific memory type to be used by a custom pool (set as VmaPoolCreateInfo::memoryTypeIndex).
  17323   When using the default pools, the best memory type for each of your allocations can be selected automatically
  17324   using a carefully designed algorithm that works across all kinds of GPUs.
  17325 - If an allocation from a custom pool at a specific memory type fails, the entire allocation operation returns failure.
  17326   When using the default pools, VMA tries another compatible memory type.
  17327 - If you set VmaPoolCreateInfo::blockSize != 0, each memory block has the same size,
  17328   while the default pools start with small blocks and allocate each subsequent block larger and larger,
  17329   up to the preferred block size.
  17330 
  17331 Many of the common concerns can be addressed in a different way than using custom pools:
  17332 
  17333 - If you want to keep your allocations of a certain size (small versus large) or a certain lifetime (transient versus long lived)
  17334   separate, you likely don't need to.
  17335   VMA uses a high quality allocation algorithm that manages memory well in various cases.
  17336   Please measure and check if using custom pools provides a benefit.
  17337 - If you want to keep your images and buffers separate, you don't need to.
  17338   VMA respects `bufferImageGranularity` limit automatically.
  17339 - If you want to keep your mapped and not mapped allocations separate, you don't need to.
  17340   VMA respects `nonCoherentAtomSize` limit automatically.
  17341   It also maps only those `VkDeviceMemory` blocks that need to map any allocation.
  17342   It even tries to keep mappable and non-mappable allocations in separate blocks to minimize the amount of mapped memory.
  17343 - If you want to choose a custom size for the default memory block, you can set it globally instead
  17344   using VmaAllocatorCreateInfo::preferredLargeHeapBlockSize.
  17345 - If you want to select specific memory type for your allocation,
  17346   you can set VmaAllocationCreateInfo::memoryTypeBits to `(1u << myMemoryTypeIndex)` instead.
  17347 - If you need to create a buffer with a certain minimum alignment, you can still do it
  17348   using the default pools with the dedicated function vmaCreateBufferWithAlignment(), as shown in the sketch below.
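
A minimal sketch of that last point, assuming `bufCreateInfo` and `allocCreateInfo` are filled as in the earlier examples:

\code
// A buffer with a custom minimum alignment of 4 KiB, from the default pools.
VkBuffer buf;
VmaAllocation alloc;
VkResult res = vmaCreateBufferWithAlignment(allocator, &bufCreateInfo, &allocCreateInfo,
    4096, &buf, &alloc, nullptr);
// Check res...
\endcode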
  17349 
  17350 
  17351 \section linear_algorithm Linear allocation algorithm
  17352 
  17353 Each Vulkan memory block managed by this library has accompanying metadata that
  17354 keeps track of used and unused regions. By default, the metadata structure and
  17355 algorithm try to find the best place for new allocations among free regions to
  17356 optimize memory usage. This way you can allocate and free objects in any order.
  17357 
  17358 ![Default allocation algorithm](../gfx/Linear_allocator_1_algo_default.png)
  17359 
  17360 Sometimes there is a need for a simpler, linear allocation algorithm. You can
  17361 create a custom pool that uses this algorithm by adding the flag
  17362 #VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT to VmaPoolCreateInfo::flags while creating the
  17363 #VmaPool object. Then an alternative metadata management is used. It always
  17364 creates new allocations after the last one and doesn't reuse free regions left after
  17365 allocations freed in the middle. This results in better allocation performance and
  17366 less memory consumed by metadata.
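
A minimal sketch of creating such a pool, assuming `memTypeIndex` was found as shown earlier (the block size shown is an arbitrary example value):

\code
VmaPoolCreateInfo poolCreateInfo = {};
poolCreateInfo.memoryTypeIndex = memTypeIndex;
poolCreateInfo.flags = VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT;
poolCreateInfo.blockSize = 64ull * 1024 * 1024;
poolCreateInfo.maxBlockCount = 1; // Required for double stack and ring buffer usage.

VmaPool pool;
VkResult res = vmaCreatePool(allocator, &poolCreateInfo, &pool);
// Check res...
\endcode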
  17367 
  17368 ![Linear allocation algorithm](../gfx/Linear_allocator_2_algo_linear.png)
  17369 
  17370 With this one flag, you can create a custom pool that can be used in many ways:
  17371 free-at-once, stack, double stack, and ring buffer. See below for details.
  17372 You don't need to specify explicitly which of these options you are going to use - it is detected automatically.
  17373 
  17374 \subsection linear_algorithm_free_at_once Free-at-once
  17375 
  17376 In a pool that uses the linear algorithm, you still need to free all the allocations
  17377 individually, e.g. by using vmaFreeMemory() or vmaDestroyBuffer(). You can free
  17378 them in any order. New allocations are always made after the last one - free space
  17379 in the middle is not reused. However, when you release all the allocations and
  17380 the pool becomes empty, allocation starts from the beginning again. This way you
  17381 can use the linear algorithm to speed up the creation of allocations that you are going
  17382 to release all at once.
  17383 
  17384 ![Free-at-once](../gfx/Linear_allocator_3_free_at_once.png)
  17385 
  17386 This mode is also available for pools created with VmaPoolCreateInfo::maxBlockCount
  17387 value that allows multiple memory blocks.
  17388 
  17389 \subsection linear_algorithm_stack Stack
  17390 
  17391 When you free an allocation that was created last, its space can be reused.
  17392 Thanks to this, if you always release allocations in the order opposite to their
  17393 creation (LIFO - Last In, First Out), you can achieve the behavior of a stack.
  17394 
  17395 ![Stack](../gfx/Linear_allocator_4_stack.png)
  17396 
  17397 This mode is also available for pools created with VmaPoolCreateInfo::maxBlockCount
  17398 value that allows multiple memory blocks.
  17399 
  17400 \subsection linear_algorithm_double_stack Double stack
  17401 
  17402 The space reserved by a custom pool with linear algorithm may be used by two
  17403 stacks:
  17404 
  17405 - The first, default one, growing up from offset 0.
  17406 - The second, "upper" one, growing down from the end towards lower offsets.
  17407 
  17408 To make allocation from the upper stack, add flag #VMA_ALLOCATION_CREATE_UPPER_ADDRESS_BIT
  17409 to VmaAllocationCreateInfo::flags.
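
A minimal sketch, assuming `pool` was created with the linear algorithm as shown above:

\code
VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.pool = pool;
allocCreateInfo.flags = VMA_ALLOCATION_CREATE_UPPER_ADDRESS_BIT;
\endcode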
  17410 
  17411 ![Double stack](../gfx/Linear_allocator_7_double_stack.png)
  17412 
  17413 Double stack is available only in pools with one memory block -
  17414 VmaPoolCreateInfo::maxBlockCount must be 1. Otherwise behavior is undefined.
  17415 
  17416 When the two stacks' ends meet, so there is not enough space between them for a
  17417 new allocation, such an allocation fails with the usual
  17418 `VK_ERROR_OUT_OF_DEVICE_MEMORY` error.
  17419 
  17420 \subsection linear_algorithm_ring_buffer Ring buffer
  17421 
  17422 When you free some allocations from the beginning and there is not enough free space
  17423 for a new one at the end of a pool, the allocator's "cursor" wraps around to the
  17424 beginning and starts allocating there. Thanks to this, if you always release
  17425 allocations in the same order as you created them (FIFO - First In, First Out),
  17426 you can achieve the behavior of a ring buffer / queue.
  17427 
  17428 ![Ring buffer](../gfx/Linear_allocator_5_ring_buffer.png)
  17429 
  17430 Ring buffer is available only in pools with one memory block -
  17431 VmaPoolCreateInfo::maxBlockCount must be 1. Otherwise behavior is undefined.
  17432 
  17433 \note \ref defragmentation is not supported in custom pools created with #VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT.
  17434 
  17435 
  17436 \page defragmentation Defragmentation
  17437 
  17438 Interleaved allocations and deallocations of many objects of varying size can
  17439 cause fragmentation over time, which can lead to a situation where the library is unable
  17440 to find a continuous range of free memory for a new allocation, even though there is
  17441 enough free space, just scattered across many small free ranges between existing
  17442 allocations.
  17443 
  17444 To mitigate this problem, you can use the defragmentation feature.
  17445 It doesn't happen automatically, though, and needs your cooperation,
  17446 because VMA is a low level library that only allocates memory.
  17447 It cannot recreate buffers and images in a new place as it doesn't remember the contents of `VkBufferCreateInfo` / `VkImageCreateInfo` structures.
  17448 It cannot copy their contents as it doesn't record any commands to a command buffer.
  17449 
  17450 Example:
  17451 
  17452 \code
  17453 VmaDefragmentationInfo defragInfo = {};
  17454 defragInfo.pool = myPool;
  17455 defragInfo.flags = VMA_DEFRAGMENTATION_FLAG_ALGORITHM_FAST_BIT;
  17456 
  17457 VmaDefragmentationContext defragCtx;
  17458 VkResult res = vmaBeginDefragmentation(allocator, &defragInfo, &defragCtx);
  17459 // Check res...
  17460 
  17461 for(;;)
  17462 {
  17463     VmaDefragmentationPassMoveInfo pass;
  17464     res = vmaBeginDefragmentationPass(allocator, defragCtx, &pass);
  17465     if(res == VK_SUCCESS)
  17466         break;
  17467     else if(res != VK_INCOMPLETE)
  17468         // Handle error...
  17469 
  17470     for(uint32_t i = 0; i < pass.moveCount; ++i)
  17471     {
  17472         // Inspect pass.pMoves[i].srcAllocation, identify what buffer/image it represents.
  17473         VmaAllocationInfo allocInfo;
  17474         vmaGetAllocationInfo(allocator, pass.pMoves[i].srcAllocation, &allocInfo);
  17475         MyEngineResourceData* resData = (MyEngineResourceData*)allocInfo.pUserData;
  17476 
  17477         // Recreate and bind this buffer/image at: pass.pMoves[i].dstMemory, pass.pMoves[i].dstOffset.
  17478         VkImageCreateInfo imgCreateInfo = ...
  17479         VkImage newImg;
  17480         res = vkCreateImage(device, &imgCreateInfo, nullptr, &newImg);
  17481         // Check res...
  17482         res = vmaBindImageMemory(allocator, pass.pMoves[i].dstTmpAllocation, newImg);
  17483         // Check res...
  17484 
  17485         // Issue a vkCmdCopyBuffer/vkCmdCopyImage to copy its content to the new place.
  17486         vkCmdCopyImage(cmdBuf, resData->img, ..., newImg, ...);
  17487     }
  17488 
  17489     // Make sure the copy commands finished executing.
  17490     vkWaitForFences(...);
  17491 
  17492     // Destroy old buffers/images bound with pass.pMoves[i].srcAllocation.
  17493     for(uint32_t i = 0; i < pass.moveCount; ++i)
  17494     {
  17495         // ...
  17496         vkDestroyImage(device, resData->img, nullptr);
  17497     }
  17498 
  17499     // Update appropriate descriptors to point to the new places...
  17500 
  17501     res = vmaEndDefragmentationPass(allocator, defragCtx, &pass);
  17502     if(res == VK_SUCCESS)
  17503         break;
  17504     else if(res != VK_INCOMPLETE)
  17505         // Handle error...
  17506 }
  17507 
  17508 vmaEndDefragmentation(allocator, defragCtx, nullptr);
  17509 \endcode
  17510 
  17511 Although functions like vmaCreateBuffer(), vmaCreateImage(), vmaDestroyBuffer(), vmaDestroyImage()
  17512 create/destroy an allocation and a buffer/image at once, these are just a shortcut for
  17513 creating the resource, allocating memory, and binding them together.
  17514 Defragmentation works on memory allocations only. You must handle the rest manually.
  17515 Defragmentation is an iterative process that should repeat "passes" as long as related functions
  17516 return `VK_INCOMPLETE`, not `VK_SUCCESS`.
  17517 In each pass:
  17518 
  17519 1. vmaBeginDefragmentationPass() function call:
  17520    - Calculates and returns the list of allocations to be moved in this pass.
  17521      Note this can be a time-consuming process.
  17522    - Reserves destination memory for them by creating temporary destination allocations
  17523      that you can query for their `VkDeviceMemory` + offset using vmaGetAllocationInfo().
  17524 2. Inside the pass, **you should**:
  17525    - Inspect the returned list of allocations to be moved.
  17526    - Create new buffers/images and bind them at the returned destination temporary allocations.
  17527    - Copy data from source to destination resources if necessary.
  17528    - Destroy the source buffers/images, but NOT their allocations.
  17529 3. vmaEndDefragmentationPass() function call:
  17530    - Frees the source memory reserved for the allocations that are moved.
  17531    - Modifies source #VmaAllocation objects that are moved to point to the destination reserved memory.
  17532    - Frees `VkDeviceMemory` blocks that became empty.
  17533 
  17534 Unlike in previous iterations of the defragmentation API, there is no list of "movable" allocations passed as a parameter.
  17535 The defragmentation algorithm tries to move all suitable allocations.
  17536 You can, however, refuse to move some of them inside a defragmentation pass, by setting
  17537 `pass.pMoves[i].operation` to #VMA_DEFRAGMENTATION_MOVE_OPERATION_IGNORE.
  17538 This is not recommended and may result in suboptimal packing of the allocations after defragmentation.
  17539 If you cannot ensure any allocation can be moved, it is better to keep movable allocations separate in a custom pool.
  17540 
  17541 Inside a pass, for each allocation that should be moved:
  17542 
  17543 - You should copy its data from the source to the destination place by calling e.g. `vkCmdCopyBuffer()`, `vkCmdCopyImage()`.
  17544   - You need to make sure these commands finished executing before destroying the source buffers/images and before calling vmaEndDefragmentationPass().
  17545 - If a resource doesn't contain any meaningful data, e.g. it is a transient color attachment image to be cleared,
  17546   filled, and used temporarily in each rendering frame, you can just recreate this image
  17547   without copying its data.
  17548 - If the resource is in `HOST_VISIBLE` and `HOST_CACHED` memory, you can copy its data on the CPU
  17549   using `memcpy()`.
  17550 - If you cannot move the allocation, you can set `pass.pMoves[i].operation` to #VMA_DEFRAGMENTATION_MOVE_OPERATION_IGNORE.
  17551   This will cancel the move.
  17552   - vmaEndDefragmentationPass() will then free the destination memory,
  17553     not the source memory of the allocation, leaving it unchanged.
  17554 - If you decide the allocation is unimportant and can be destroyed instead of moved (e.g. it wasn't used for long time),
  17555   you can set `pass.pMoves[i].operation` to #VMA_DEFRAGMENTATION_MOVE_OPERATION_DESTROY.
  17556   - vmaEndDefragmentationPass() will then free both source and destination memory, and will destroy the source #VmaAllocation object.
  17557 
  17558 You can defragment a specific custom pool by setting VmaDefragmentationInfo::pool
  17559 (like in the example above) or all the default pools by setting this member to null.
  17560 
  17561 Defragmentation is always performed in each pool separately.
  17562 Allocations are never moved between different Vulkan memory types.
  17563 The size of the destination memory reserved for a moved allocation is the same as the original one.
  17564 Alignment of an allocation as it was determined using `vkGetBufferMemoryRequirements()` etc. is also respected after defragmentation.
  17565 Buffers/images should be recreated with the same `VkBufferCreateInfo` / `VkImageCreateInfo` parameters as the original ones.
  17566 
  17567 You can perform the defragmentation incrementally to limit the number of allocations and bytes to be moved
  17568 in each pass, e.g. to call it in sync with render frames and avoid big hitches.
  17569 See members: VmaDefragmentationInfo::maxBytesPerPass, VmaDefragmentationInfo::maxAllocationsPerPass.
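
For example, a minimal sketch that caps each pass, assuming `myPool` is an existing custom pool (the limit values are illustrative, not recommendations):

\code
VmaDefragmentationInfo defragInfo = {};
defragInfo.pool = myPool; // Or VK_NULL_HANDLE to defragment default pools.
defragInfo.maxBytesPerPass = 64ull * 1024 * 1024; // At most 64 MB moved per pass.
defragInfo.maxAllocationsPerPass = 16; // At most 16 allocations moved per pass.

VmaDefragmentationContext defragCtx;
VkResult res = vmaBeginDefragmentation(allocator, &defragInfo, &defragCtx);
\endcode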
  17570 
  17571 It is also safe to perform the defragmentation asynchronously to render frames and other Vulkan and VMA
  17572 usage, possibly from multiple threads, with the exception that allocations
  17573 returned in VmaDefragmentationPassMoveInfo::pMoves shouldn't be destroyed until the defragmentation pass is ended.
  17574 
  17575 <b>Mapping</b> is preserved on allocations that are moved during defragmentation.
  17576 Whether mapped through #VMA_ALLOCATION_CREATE_MAPPED_BIT or vmaMapMemory(), the allocations
  17577 are mapped at their new place. Of course, the pointer to the mapped data changes, so it needs to be queried again
  17578 using VmaAllocationInfo::pMappedData.
  17579 
  17580 \note Defragmentation is not supported in custom pools created with #VMA_POOL_CREATE_LINEAR_ALGORITHM_BIT.
  17581 
  17582 
  17583 \page statistics Statistics
  17584 
  17585 This library contains several functions that return information about its internal state,
  17586 especially the amount of memory allocated from Vulkan.
  17587 
  17588 \section statistics_numeric_statistics Numeric statistics
  17589 
  17590 If you need to obtain basic statistics about memory usage per heap, together with current budget,
  17591 you can call function vmaGetHeapBudgets() and inspect structure #VmaBudget.
  17592 This is useful to keep track of memory usage and stay within budget
  17593 (see also \ref staying_within_budget).
  17594 Example:
  17595 
  17596 \code
  17597 uint32_t heapIndex = ...
  17598 
  17599 VmaBudget budgets[VK_MAX_MEMORY_HEAPS];
  17600 vmaGetHeapBudgets(allocator, budgets);
  17601 
  17602 printf("My heap currently has %u allocations taking %llu B,\n",
  17603     budgets[heapIndex].statistics.allocationCount,
  17604     budgets[heapIndex].statistics.allocationBytes);
  17605 printf("allocated out of %u Vulkan device memory blocks taking %llu B,\n",
  17606     budgets[heapIndex].statistics.blockCount,
  17607     budgets[heapIndex].statistics.blockBytes);
  17608 printf("Vulkan reports total usage %llu B with budget %llu B.\n",
  17609     budgets[heapIndex].usage,
  17610     budgets[heapIndex].budget);
  17611 \endcode
  17612 
  17613 You can query for more detailed statistics per memory heap, type, and totals,
  17614 including minimum and maximum allocation size and unused range size,
  17615 by calling function vmaCalculateStatistics() and inspecting structure #VmaTotalStatistics.
  17616 This function is slower though, as it has to traverse all the internal data structures,
  17617 so it should be used only for debugging purposes.
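
For example, a minimal sketch that prints library-wide totals:

\code
VmaTotalStatistics stats;
vmaCalculateStatistics(allocator, &stats);
printf("Total: %u allocations taking %llu B.\n",
    stats.total.statistics.allocationCount,
    stats.total.statistics.allocationBytes);
\endcode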
  17618 
  17619 You can query for statistics of a custom pool using function vmaGetPoolStatistics()
  17620 or vmaCalculatePoolStatistics().
  17621 
  17622 You can query for information about a specific allocation using function vmaGetAllocationInfo().
  17623 It fills structure #VmaAllocationInfo.
  17624 
  17625 \section statistics_json_dump JSON dump
  17626 
  17627 You can dump internal state of the allocator to a string in JSON format using function vmaBuildStatsString().
  17628 The result is guaranteed to be correct JSON.
  17629 It uses ANSI encoding.
  17630 Any strings provided by user (see [Allocation names](@ref allocation_names))
  17631 are copied as-is and properly escaped for JSON, so if they use UTF-8, ISO-8859-2 or any other encoding,
  17632 this JSON string can be treated as using this encoding.
  17633 It must be freed using function vmaFreeStatsString().
  17634 
  17635 The format of this JSON string is not part of the official documentation of the library,
  17636 but it will not change in a backward-incompatible way without an increase of the library's major version number
  17637 and an appropriate mention in the changelog.
  17638 
  17639 The JSON string contains all the data that can be obtained using vmaCalculateStatistics().
  17640 It can also contain a detailed map of allocated memory blocks and their regions -
  17641 free and occupied by allocations.
  17642 This allows you e.g. to visualize the memory or assess fragmentation.
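
A minimal sketch of creating and freeing such a dump:

\code
char* statsString = nullptr;
vmaBuildStatsString(allocator, &statsString, VK_TRUE); // VK_TRUE = include the detailed map.
// Write statsString to a file or log here...
vmaFreeStatsString(allocator, statsString);
\endcode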
  17643 
  17644 
  17645 \page allocation_annotation Allocation names and user data
  17646 
  17647 \section allocation_user_data Allocation user data
  17648 
  17649 You can annotate allocations with your own information, e.g. for debugging purposes.
  17650 To do that, fill VmaAllocationCreateInfo::pUserData field when creating
  17651 an allocation. It is an opaque `void*` pointer. You can use it e.g. as a pointer,
  17652 some handle, index, key, ordinal number or any other value that would associate
  17653 the allocation with your custom metadata.
  17654 It is useful for identifying the appropriate data structures in your engine for a given #VmaAllocation,
  17655 e.g. when doing \ref defragmentation.
  17656 
  17657 \code
  17658 VkBufferCreateInfo bufCreateInfo = ...
  17659 
  17660 MyBufferMetadata* pMetadata = CreateBufferMetadata();
  17661 
  17662 VmaAllocationCreateInfo allocCreateInfo = {};
  17663 allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
  17664 allocCreateInfo.pUserData = pMetadata;
  17665 
  17666 VkBuffer buffer;
  17667 VmaAllocation allocation;
  17668 vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buffer, &allocation, nullptr);
  17669 \endcode
  17670 
  17671 The pointer may be later retrieved as VmaAllocationInfo::pUserData:
  17672 
  17673 \code
  17674 VmaAllocationInfo allocInfo;
  17675 vmaGetAllocationInfo(allocator, allocation, &allocInfo);
  17676 MyBufferMetadata* pMetadata = (MyBufferMetadata*)allocInfo.pUserData;
  17677 \endcode
  17678 
  17679 It can also be changed using function vmaSetAllocationUserData().
  17680 
  17681 The (non-zero) `pUserData` values of allocations are printed in hexadecimal form in the JSON report
  17682 created by vmaBuildStatsString().
  17683 
  17684 \section allocation_names Allocation names
  17685 
  17686 An allocation can also carry a null-terminated string, giving a name to the allocation.
  17687 To set it, call vmaSetAllocationName().
  17688 The library creates an internal copy of the string, so the pointer you pass doesn't need
  17689 to be valid for the whole lifetime of the allocation. You can free it after the call.
  17690 
  17691 \code
  17692 std::string imageName = "Texture: ";
  17693 imageName += fileName;
  17694 vmaSetAllocationName(allocator, allocation, imageName.c_str());
  17695 \endcode
  17696 
  17697 The string can be later retrieved by inspecting VmaAllocationInfo::pName.
  17698 It is also printed in JSON report created by vmaBuildStatsString().
  17699 
  17700 \note Setting a string name on a VMA allocation doesn't automatically set it on the Vulkan buffer or image created with it.
  17701 You must do that manually using an extension like VK_EXT_debug_utils, which is independent of this library.
  17702 
  17703 
  17704 \page virtual_allocator Virtual allocator
  17705 
  17706 As an extra feature, the core allocation algorithm of the library is exposed through a simple and convenient API of "virtual allocator".
  17707 It doesn't allocate any real GPU memory. It just keeps track of used and free regions of a "virtual block".
  17708 You can use it to allocate your own memory or other objects, even completely unrelated to Vulkan.
  17709 A common use case is sub-allocation of pieces of one large GPU buffer.
  17710 
  17711 \section virtual_allocator_creating_virtual_block Creating virtual block
  17712 
  17713 This functionality doesn't use a main "allocator" object.
  17714 You don't need to have a #VmaAllocator object created.
  17715 All you need to do is create a separate #VmaVirtualBlock object for each block of memory you want to be managed by the allocator:
  17716 
  17717 -# Fill in #VmaVirtualBlockCreateInfo structure.
  17718 -# Call vmaCreateVirtualBlock(). Get new #VmaVirtualBlock object.
  17719 
  17720 Example:
  17721 
  17722 \code
  17723 VmaVirtualBlockCreateInfo blockCreateInfo = {};
  17724 blockCreateInfo.size = 1048576; // 1 MB
  17725 
  17726 VmaVirtualBlock block;
  17727 VkResult res = vmaCreateVirtualBlock(&blockCreateInfo, &block);
  17728 \endcode
  17729 
  17730 \section virtual_allocator_making_virtual_allocations Making virtual allocations
  17731 
  17732 A #VmaVirtualBlock object contains an internal data structure that keeps track of free and occupied regions
  17733 using the same code as the main Vulkan memory allocator.
  17734 Similarly to #VmaAllocation for standard GPU allocations, there is #VmaVirtualAllocation type
  17735 that represents an opaque handle to an allocation within the virtual block.
  17736 
  17737 In order to make such an allocation:
  17738 
  17739 -# Fill in #VmaVirtualAllocationCreateInfo structure.
  17740 -# Call vmaVirtualAllocate(). Get new #VmaVirtualAllocation object that represents the allocation.
  17741    You can also receive `VkDeviceSize offset` that was assigned to the allocation.
  17742 
  17743 Example:
  17744 
  17745 \code
  17746 VmaVirtualAllocationCreateInfo allocCreateInfo = {};
  17747 allocCreateInfo.size = 4096; // 4 KB
  17748 
  17749 VmaVirtualAllocation alloc;
  17750 VkDeviceSize offset;
  17751 res = vmaVirtualAllocate(block, &allocCreateInfo, &alloc, &offset);
  17752 if(res == VK_SUCCESS)
  17753 {
  17754     // Use the 4 KB of your memory starting at offset.
  17755 }
  17756 else
  17757 {
  17758     // Allocation failed - no space for it could be found. Handle this error!
  17759 }
  17760 \endcode
  17761 
  17762 \section virtual_allocator_deallocation Deallocation
  17763 
  17764 When no longer needed, an allocation can be freed by calling vmaVirtualFree().
  17765 You can only pass to this function an allocation that was previously returned by vmaVirtualAllocate()
  17766 called for the same #VmaVirtualBlock.
  17767 
  17768 When the whole block is no longer needed, the block object can be released by calling vmaDestroyVirtualBlock().
  17769 All allocations must be freed before the block is destroyed, which is checked internally by an assert.
  17770 However, if you don't want to call vmaVirtualFree() for each allocation, you can use vmaClearVirtualBlock() to free them all at once -
  17771 a feature not available in normal Vulkan memory allocator. Example:
  17772 
  17773 \code
  17774 vmaVirtualFree(block, alloc);
  17775 vmaDestroyVirtualBlock(block);
  17776 \endcode
  17777 
  17778 \section virtual_allocator_allocation_parameters Allocation parameters
  17779 
  17780 You can attach a custom pointer to each allocation by using vmaSetVirtualAllocationUserData().
  17781 Its default value is null.
  17782 It can be used to store any data that needs to be associated with that allocation - e.g. an index, a handle, or a pointer to some
  17783 larger data structure containing more information. Example:
  17784 
  17785 \code
  17786 struct CustomAllocData
  17787 {
  17788     std::string m_AllocName;
  17789 };
  17790 CustomAllocData* allocData = new CustomAllocData();
  17791 allocData->m_AllocName = "My allocation 1";
  17792 vmaSetVirtualAllocationUserData(block, alloc, allocData);
  17793 \endcode
  17794 
  17795 The pointer can later be fetched, along with allocation offset and size, by passing the allocation handle to function
  17796 vmaGetVirtualAllocationInfo() and inspecting returned structure #VmaVirtualAllocationInfo.
  17797 If you allocated a new object to be used as the custom pointer, don't forget to delete that object before freeing the allocation!
  17798 Example:
  17799 
  17800 \code
  17801 VmaVirtualAllocationInfo allocInfo;
  17802 vmaGetVirtualAllocationInfo(block, alloc, &allocInfo);
  17803 delete (CustomAllocData*)allocInfo.pUserData;
  17804 
  17805 vmaVirtualFree(block, alloc);
  17806 \endcode
  17807 
  17808 \section virtual_allocator_alignment_and_units Alignment and units
  17809 
  17810 It feels natural to express sizes and offsets in bytes.
  17811 If an offset of an allocation needs to be aligned to a multiple of some number (e.g. 4 bytes), you can fill optional member
  17812 VmaVirtualAllocationCreateInfo::alignment to request it. Example:
  17813 
  17814 \code
  17815 VmaVirtualAllocationCreateInfo allocCreateInfo = {};
  17816 allocCreateInfo.size = 4096; // 4 KB
  17817 allocCreateInfo.alignment = 4; // Returned offset must be a multiple of 4 B
  17818 
  17819 VmaVirtualAllocation alloc;
  17820 res = vmaVirtualAllocate(block, &allocCreateInfo, &alloc, nullptr);
  17821 \endcode
  17822 
  17823 Alignments of different allocations made from one block may vary.
  17824 However, if all alignments and sizes are always a multiple of some size, e.g. 4 B or `sizeof(MyDataStruct)`,
  17825 you can express all sizes, alignments, and offsets in multiples of that size instead of individual bytes.
  17826 It might be more convenient, but you need to make sure to use this new unit consistently in all the places:
  17827 
  17828 - VmaVirtualBlockCreateInfo::size
  17829 - VmaVirtualAllocationCreateInfo::size and VmaVirtualAllocationCreateInfo::alignment
  17830 - Using offset returned by vmaVirtualAllocate() or in VmaVirtualAllocationInfo::offset
  17831 
  17832 \section virtual_allocator_statistics Statistics
  17833 
  17834 You can obtain statistics of a virtual block using vmaGetVirtualBlockStatistics()
  17835 (to get brief statistics that are fast to calculate)
  17836 or vmaCalculateVirtualBlockStatistics() (to get more detailed statistics, slower to calculate).
  17837 The functions fill structures #VmaStatistics and #VmaDetailedStatistics, respectively - the same ones used by the normal Vulkan memory allocator.
  17838 Example:
  17839 
  17840 \code
  17841 VmaStatistics stats;
  17842 vmaGetVirtualBlockStatistics(block, &stats);
  17843 printf("My virtual block has %llu bytes used by %u virtual allocations\n",
  17844     stats.allocationBytes, stats.allocationCount);
  17845 \endcode
  17846 
  17847 You can also request a full list of allocations and free regions as a string in JSON format by calling
  17848 vmaBuildVirtualBlockStatsString().
  17849 Returned string must be later freed using vmaFreeVirtualBlockStatsString().
  17850 The format of this string differs from the one returned by the main Vulkan allocator, but it is similar.
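
A minimal sketch:

\code
char* statsString = nullptr;
vmaBuildVirtualBlockStatsString(block, &statsString, VK_TRUE); // VK_TRUE = include the detailed map.
// Inspect or log statsString here...
vmaFreeVirtualBlockStatsString(block, statsString);
\endcode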
  17851 
  17852 \section virtual_allocator_additional_considerations Additional considerations
  17853 
  17854 The "virtual allocator" functionality is implemented on a level of individual memory blocks.
  17855 Keeping track of a whole collection of blocks, allocating new ones when out of free space,
  17856 deleting empty ones, and deciding which one to try first for a new allocation must be implemented by the user.
  17857 
  17858 Alternative allocation algorithms are supported, just like in custom pools of the real GPU memory.
  17859 See enum #VmaVirtualBlockCreateFlagBits to learn how to specify them (e.g. #VMA_VIRTUAL_BLOCK_CREATE_LINEAR_ALGORITHM_BIT).
  17860 You can find their description in chapter \ref custom_memory_pools.
  17861 Allocation strategies are also supported.
  17862 See enum #VmaVirtualAllocationCreateFlagBits to learn how to specify them (e.g. #VMA_VIRTUAL_ALLOCATION_CREATE_STRATEGY_MIN_TIME_BIT).
  17863 
  17864 The following features are supported only by the allocator of the real GPU memory and not by virtual allocations:
  17865 buffer-image granularity, `VMA_DEBUG_MARGIN`, `VMA_MIN_ALIGNMENT`.
  17866 
  17867 
  17868 \page debugging_memory_usage Debugging incorrect memory usage
  17869 
  17870 If you suspect a bug with memory usage, like usage of uninitialized memory or
  17871 memory being overwritten out of bounds of an allocation,
  17872 you can use debug features of this library to verify this.
  17873 
  17874 \section debugging_memory_usage_initialization Memory initialization
  17875 
  17876 If you experience a bug with incorrect and nondeterministic data in your program and you suspect uninitialized memory to be used,
  17877 you can enable automatic memory initialization to verify this.
  17878 To do it, define macro `VMA_DEBUG_INITIALIZE_ALLOCATIONS` to 1.
  17879 
  17880 \code
  17881 #define VMA_DEBUG_INITIALIZE_ALLOCATIONS 1
  17882 #include "vk_mem_alloc.h"
  17883 \endcode
  17884 
  17885 It makes memory of new allocations initialized to bit pattern `0xDCDCDCDC`.
  17886 Before an allocation is destroyed, its memory is filled with bit pattern `0xEFEFEFEF`.
  17887 Memory is automatically mapped and unmapped if necessary.
  17888 
  17889 If you find these values while debugging your program, chances are good that you incorrectly
  17890 read Vulkan memory that is allocated but not initialized, or that is already freed, respectively.
  17891 
  17892 Memory initialization works only with memory types that are `HOST_VISIBLE` and with allocations that can be mapped.
  17893 It also works with dedicated allocations.
  17894 
  17895 \section debugging_memory_usage_margins Margins
  17896 
  17897 By default, allocations are laid out in memory blocks next to each other if possible
  17898 (considering required alignment, `bufferImageGranularity`, and `nonCoherentAtomSize`).
  17899 
  17900 ![Allocations without margin](../gfx/Margins_1.png)
  17901 
  17902 Define macro `VMA_DEBUG_MARGIN` to some non-zero value (e.g. 16) to enforce the specified
  17903 number of bytes as a margin after every allocation.
  17904 
  17905 \code
  17906 #define VMA_DEBUG_MARGIN 16
  17907 #include "vk_mem_alloc.h"
  17908 \endcode
  17909 
  17910 ![Allocations with margin](../gfx/Margins_2.png)
  17911 
  17912 If your bug goes away after enabling margins, it may be caused by memory
  17913 being overwritten outside of allocation boundaries. It is not 100% certain though.
  17914 A change in application behavior may also be caused by a different order and distribution
  17915 of allocations across memory blocks after margins are applied.
  17916 
  17917 Margins work with all types of memory.
  17918 
  17919 The margin is applied only to allocations made out of memory blocks and not to dedicated
  17920 allocations, which have their own memory block of a specific size.
  17921 It is thus not applied to allocations made using the #VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT flag
  17922 or to those automatically placed in dedicated allocations, e.g. due to their
  17923 large size or because the VK_KHR_dedicated_allocation extension recommended it.
  17924 
  17925 Margins appear in [JSON dump](@ref statistics_json_dump) as part of free space.
  17926 
  17927 Note that enabling margins increases memory usage and fragmentation.
  17928 
  17929 Margins do not apply to \ref virtual_allocator.
  17930 
  17931 \section debugging_memory_usage_corruption_detection Corruption detection
  17932 
  17933 You can additionally define macro `VMA_DEBUG_DETECT_CORRUPTION` to 1 to enable validation
  17934 of contents of the margins.
  17935 
  17936 \code
  17937 #define VMA_DEBUG_MARGIN 16
  17938 #define VMA_DEBUG_DETECT_CORRUPTION 1
  17939 #include "vk_mem_alloc.h"
  17940 \endcode
  17941 
  17942 When this feature is enabled, the number of bytes specified as `VMA_DEBUG_MARGIN`
  17943 (it must be a multiple of 4) after every allocation is filled with a magic number.
  17944 This idea is also known as a "canary".
  17945 Memory is automatically mapped and unmapped if necessary.
  17946 
  17947 This number is validated automatically when the allocation is destroyed.
  17948 If it is not equal to the expected value, `VMA_ASSERT()` is executed.
  17949 This clearly means that either the CPU or the GPU overwrote the memory outside the boundaries of the allocation,
  17950 which indicates a serious bug.
  17951 
  17952 You can also explicitly request checking margins of all allocations in all memory blocks
  17953 that belong to specified memory types by using function vmaCheckCorruption(),
  17954 or in memory blocks that belong to specified custom pool, by using function
  17955 vmaCheckPoolCorruption().
  17956 
  17957 Margin validation (corruption detection) works only for memory types that are
  17958 `HOST_VISIBLE` and `HOST_COHERENT`.
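
A minimal sketch of an explicit check across all memory types:

\code
// UINT32_MAX = check all memory types; only those that are HOST_VISIBLE
// and HOST_COHERENT are actually validated.
VkResult res = vmaCheckCorruption(allocator, UINT32_MAX);
// VK_SUCCESS means no corruption was found.
\endcode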
  17959 
  17960 
  17961 \section debugging_memory_usage_leak_detection Leak detection features
  17962 
  17963 At allocation and allocator destruction time VMA checks for unfreed and unmapped blocks using
  17964 `VMA_ASSERT_LEAK()`. This macro defaults to an assertion, triggering a typically fatal error in Debug
  17965 builds, and doing nothing in Release builds. You can provide your own definition of `VMA_ASSERT_LEAK()`
  17966 to change this behavior.
  17967 
  17968 At memory block destruction time VMA lists out all unfreed allocations using the `VMA_LEAK_LOG_FORMAT()`
  17969 macro, which defaults to `VMA_DEBUG_LOG_FORMAT`, which in turn defaults to a no-op.
  17970 If you're having trouble with leaks - for example, the aforementioned assertion triggers, but you don't
  17971 quite know \em why - overriding this macro to print out the leaking blocks, combined with assigning
  17972 individual names to allocations using vmaSetAllocationName(), can greatly aid in fixing them.
  17973 
  17974 \page other_api_interop Interop with other graphics APIs
  17975 
  17976 VMA provides some features that help with interoperability with other graphics APIs, e.g. OpenGL.
  17977 
  17978 \section opengl_interop_exporting_memory Exporting memory
  17979 
  17980 If you want to attach `VkExportMemoryAllocateInfoKHR` or another structure to the `pNext` chain of memory allocations made by the library:
  17981 
  17982 You can create \ref custom_memory_pools for such allocations.
  17983 Define and fill in your `VkExportMemoryAllocateInfoKHR` structure and attach it to VmaPoolCreateInfo::pMemoryAllocateNext
  17984 while creating the custom pool.
  17985 Please note that the structure must remain alive and unchanged for the whole lifetime of the #VmaPool,
  17986 not only while creating it, as no copy of the structure is made,
  17987 but its original pointer is used for each allocation instead.
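
A minimal sketch, assuming Win32 opaque handles and a hypothetical `myMemoryTypeIndex` found earlier, e.g. using vmaFindMemoryTypeIndexForBufferInfo():

\code
VkExportMemoryAllocateInfoKHR exportMemAllocInfo = {
    VK_STRUCTURE_TYPE_EXPORT_MEMORY_ALLOCATE_INFO_KHR };
exportMemAllocInfo.handleTypes = VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_WIN32_BIT_KHR;

VmaPoolCreateInfo poolCreateInfo = {};
poolCreateInfo.memoryTypeIndex = myMemoryTypeIndex;
// The structure must stay alive for the whole lifetime of the pool.
poolCreateInfo.pMemoryAllocateNext = &exportMemAllocInfo;

VmaPool pool;
VkResult res = vmaCreatePool(allocator, &poolCreateInfo, &pool);
\endcode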
  17988 
  17989 If you want to export all memory allocated by VMA from certain memory types,
  17990 including dedicated allocations and other allocations made from default pools,
  17991 an alternative solution is to fill in VmaAllocatorCreateInfo::pTypeExternalMemoryHandleTypes.
  17992 It should point to an array of `VkExternalMemoryHandleTypeFlagsKHR` values to be automatically passed by the library
  17993 through `VkExportMemoryAllocateInfoKHR` on each allocation made from the respective memory type.
  17994 Please note that new versions of the library also support dedicated allocations created in custom pools.
  17995 
  17996 You should not mix these two methods in a way that would apply both to the same memory type.
  17997 Otherwise, `VkExportMemoryAllocateInfoKHR` structure would be attached twice to the `pNext` chain of `VkMemoryAllocateInfo`.
  17998 
  17999 
  18000 \section opengl_interop_custom_alignment Custom alignment
  18001 
  18002 Buffers or images exported to a different API like OpenGL may require a different alignment,
  18003 higher than the one used by the library automatically, queried from functions like `vkGetBufferMemoryRequirements`.
  18004 To impose such alignment:
  18005 
  18006 You can create \ref custom_memory_pools for such allocations.
  18007 Set VmaPoolCreateInfo::minAllocationAlignment member to the minimum alignment required for each allocation
  18008 to be made out of this pool.
  18009 The alignment actually used will be the maximum of this member and the alignment returned for the specific buffer or image
  18010 from a function like `vkGetBufferMemoryRequirements`, which is called by VMA automatically.
  18011 
  18012 If you want to create a buffer with a specific minimum alignment out of default pools,
  18013 use special function vmaCreateBufferWithAlignment(), which takes additional parameter `minAlignment`.
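
A minimal sketch of the latter (the 4 KB alignment is just an example):

\code
VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
bufCreateInfo.size = 65536;
bufCreateInfo.usage = VK_BUFFER_USAGE_STORAGE_BUFFER_BIT;

VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;

VkBuffer buf;
VmaAllocation alloc;
VkResult res = vmaCreateBufferWithAlignment(allocator, &bufCreateInfo, &allocCreateInfo,
    4096, // minAlignment
    &buf, &alloc, nullptr);
\endcode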
  18014 
  18015 Note that the problem of alignment affects only resources placed inside bigger `VkDeviceMemory` blocks and not dedicated
  18016 allocations, as these, by definition, always have offset = 0, because the resource is bound to the beginning of its dedicated block, which satisfies any alignment.
  18017 You can ensure that an allocation is created as dedicated by using #VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT.
  18018 Contrary to Direct3D 12, Vulkan doesn't have a concept of alignment of the entire memory block passed on its allocation.
  18019 
  18020 \section opengl_interop_extended_allocation_information Extended allocation information
  18021 
  18022 If you want to rely on VMA to allocate your buffers and images inside larger memory blocks,
  18023 but you need to know the size of the entire block and whether the allocation was made
  18024 with its own dedicated memory, use function vmaGetAllocationInfo2() to retrieve
  18025 extended allocation information in structure #VmaAllocationInfo2.
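
A minimal sketch, assuming `alloc` is an existing #VmaAllocation:

\code
VmaAllocationInfo2 allocInfo2 = {};
vmaGetAllocationInfo2(allocator, alloc, &allocInfo2);
printf("Offset %llu in a block of %llu B, dedicated: %s.\n",
    allocInfo2.allocationInfo.offset,
    allocInfo2.blockSize,
    allocInfo2.dedicatedMemory ? "yes" : "no");
\endcode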
  18026 
  18027 
  18028 
  18029 \page usage_patterns Recommended usage patterns
  18030 
  18031 Vulkan gives great flexibility in memory allocation.
  18032 This chapter shows the most common patterns.
  18033 
  18034 See also slides from talk:
  18035 [Sawicki, Adam. Advanced Graphics Techniques Tutorial: Memory management in Vulkan and DX12. Game Developers Conference, 2018](https://www.gdcvault.com/play/1025458/Advanced-Graphics-Techniques-Tutorial-New)
  18036 
  18037 
  18038 \section usage_patterns_gpu_only GPU-only resource
  18039 
  18040 <b>When:</b>
  18041 Any resources that you frequently write and read on GPU,
  18042 e.g. images used as color attachments (aka "render targets"), depth-stencil attachments,
  18043 images/buffers used as storage image/buffer (aka "Unordered Access View (UAV)").
  18044 
  18045 <b>What to do:</b>
  18046 Let the library select the optimal memory type, which will likely have `VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT`.
  18047 
  18048 \code
  18049 VkImageCreateInfo imgCreateInfo = { VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO };
  18050 imgCreateInfo.imageType = VK_IMAGE_TYPE_2D;
  18051 imgCreateInfo.extent.width = 3840;
  18052 imgCreateInfo.extent.height = 2160;
  18053 imgCreateInfo.extent.depth = 1;
  18054 imgCreateInfo.mipLevels = 1;
  18055 imgCreateInfo.arrayLayers = 1;
  18056 imgCreateInfo.format = VK_FORMAT_R8G8B8A8_UNORM;
  18057 imgCreateInfo.tiling = VK_IMAGE_TILING_OPTIMAL;
  18058 imgCreateInfo.initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;
  18059 imgCreateInfo.usage = VK_IMAGE_USAGE_SAMPLED_BIT | VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT;
  18060 imgCreateInfo.samples = VK_SAMPLE_COUNT_1_BIT;
  18061 
  18062 VmaAllocationCreateInfo allocCreateInfo = {};
  18063 allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
  18064 allocCreateInfo.flags = VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT;
  18065 allocCreateInfo.priority = 1.0f;
  18066 
  18067 VkImage img;
  18068 VmaAllocation alloc;
  18069 vmaCreateImage(allocator, &imgCreateInfo, &allocCreateInfo, &img, &alloc, nullptr);
  18070 \endcode
  18071 
  18072 <b>Also consider:</b>
  18073 Consider creating them as dedicated allocations using #VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT,
  18074 especially if they are large or if you plan to destroy and recreate them with different sizes
  18075 e.g. when display resolution changes.
  18076 Prefer to create such resources first and all other GPU resources (like textures and vertex buffers) later.
  18077 When VK_EXT_memory_priority extension is enabled, it is also worth setting high priority to such allocation
  18078 to decrease chances to be evicted to system memory by the operating system.
  18079 
  18080 \section usage_patterns_staging_copy_upload Staging copy for upload
  18081 
  18082 <b>When:</b>
  18083 A "staging" buffer that you want to map and fill from CPU code, then use as a source of transfer
  18084 to some GPU resource.
  18085 
  18086 <b>What to do:</b>
  18087 Use flag #VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT.
  18088 Let the library select the optimal memory type, which will always have `VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT`.
  18089 
  18090 \code
  18091 VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
  18092 bufCreateInfo.size = 65536;
  18093 bufCreateInfo.usage = VK_BUFFER_USAGE_TRANSFER_SRC_BIT;
  18094 
  18095 VmaAllocationCreateInfo allocCreateInfo = {};
  18096 allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
  18097 allocCreateInfo.flags = VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT |
  18098     VMA_ALLOCATION_CREATE_MAPPED_BIT;
  18099 
  18100 VkBuffer buf;
  18101 VmaAllocation alloc;
  18102 VmaAllocationInfo allocInfo;
  18103 vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, &allocInfo);
  18104 
  18105 ...
  18106 
  18107 memcpy(allocInfo.pMappedData, myData, myDataSize);
  18108 \endcode
  18109 
  18110 <b>Also consider:</b>
  18111 You can map the allocation using vmaMapMemory() or you can create it as persistently mapped
  18112 using #VMA_ALLOCATION_CREATE_MAPPED_BIT, as in the example above.
  18113 
  18114 
  18115 \section usage_patterns_readback Readback
  18116 
  18117 <b>When:</b>
  18118 Buffers for data written by or transferred from the GPU that you want to read back on the CPU,
  18119 e.g. results of some computations.
  18120 
  18121 <b>What to do:</b>
  18122 Use flag #VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT.
  18123 Let the library select the optimal memory type, which will always have `VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT`
  18124 and `VK_MEMORY_PROPERTY_HOST_CACHED_BIT`.
  18125 
  18126 \code
  18127 VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
  18128 bufCreateInfo.size = 65536;
  18129 bufCreateInfo.usage = VK_BUFFER_USAGE_TRANSFER_DST_BIT;
  18130 
  18131 VmaAllocationCreateInfo allocCreateInfo = {};
  18132 allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
  18133 allocCreateInfo.flags = VMA_ALLOCATION_CREATE_HOST_ACCESS_RANDOM_BIT |
  18134     VMA_ALLOCATION_CREATE_MAPPED_BIT;
  18135 
  18136 VkBuffer buf;
  18137 VmaAllocation alloc;
  18138 VmaAllocationInfo allocInfo;
  18139 vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, &allocInfo);
  18140 
  18141 ...
  18142 
  18143 const float* downloadedData = (const float*)allocInfo.pMappedData;
  18144 \endcode
  18145 
  18146 
  18147 \section usage_patterns_advanced_data_uploading Advanced data uploading
  18148 
  18149 For resources that you frequently write on CPU via mapped pointer and
  18150 frequently read on GPU e.g. as a uniform buffer (also called "dynamic"), multiple options are possible:
  18151 
  18152 -# The easiest solution is to have one copy of the resource in `HOST_VISIBLE` memory,
  18153    even if it means system RAM (not `DEVICE_LOCAL`) on systems with a discrete graphics card,
  18154    and make the device reach out to that resource directly.
  18155    - Reads performed by the device will then go through the PCI Express bus.
  18156      The performance of this access may be limited, but it may be fine depending on the size
  18157      of this resource (whether it is small enough to quickly end up in GPU cache) and the sparsity
  18158      of access.
  18159 -# On systems with unified memory (e.g. AMD APU or Intel integrated graphics, mobile chips),
  18160    a memory type may be available that is both `HOST_VISIBLE` (available for mapping) and `DEVICE_LOCAL`
  18161    (fast to access from the GPU). Then, it is likely the best choice for such type of resource.
  18162 -# Systems with a discrete graphics card and separate video memory may or may not expose
  18163    a memory type that is both `HOST_VISIBLE` and `DEVICE_LOCAL`, also known as Base Address Register (BAR).
  18164    If they do, it represents a piece of VRAM (or entire VRAM, if ReBAR is enabled in the motherboard BIOS)
  18165    that is available to CPU for mapping.
  18166    - Writes performed by the host to that memory go through the PCI Express bus.
  18167      The performance of these writes may be limited, but it may be fine, especially on PCIe 4.0,
  18168      as long as rules of using uncached and write-combined memory are followed - only sequential writes and no reads.
  18169 -# Finally, you may need or prefer to create a separate copy of the resource in `DEVICE_LOCAL` memory,
  18170    a separate "staging" copy in `HOST_VISIBLE` memory and perform an explicit transfer command between them.
  18171 
  18172 Thankfully, VMA offers an aid to create and use such resources in the way optimal
  18173 for the current Vulkan device. To help the library make the best choice,
  18174 use flag #VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT together with
  18175 #VMA_ALLOCATION_CREATE_HOST_ACCESS_ALLOW_TRANSFER_INSTEAD_BIT.
  18176 It will then prefer a memory type that is both `DEVICE_LOCAL` and `HOST_VISIBLE` (integrated memory or BAR),
  18177 but if no such memory type is available or allocation from it fails
  18178 (PC graphics cards have only 256 MB of BAR by default, unless ReBAR is supported and enabled in BIOS),
  18179 it will fall back to `DEVICE_LOCAL` memory for fast GPU access.
  18180 It is then up to you to detect that the allocation ended up in a memory type that is not `HOST_VISIBLE`,
  18181 so you need to create another "staging" allocation and perform explicit transfers.
  18182 
  18183 \code
  18184 VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
  18185 bufCreateInfo.size = 65536;
  18186 bufCreateInfo.usage = VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;
  18187 
  18188 VmaAllocationCreateInfo allocCreateInfo = {};
  18189 allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
  18190 allocCreateInfo.flags = VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT |
  18191     VMA_ALLOCATION_CREATE_HOST_ACCESS_ALLOW_TRANSFER_INSTEAD_BIT |
  18192     VMA_ALLOCATION_CREATE_MAPPED_BIT;
  18193 
  18194 VkBuffer buf;
  18195 VmaAllocation alloc;
  18196 VmaAllocationInfo allocInfo;
  18197 VkResult result = vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, &allocInfo);
  18198 // Check result...
  18199 
  18200 VkMemoryPropertyFlags memPropFlags;
  18201 vmaGetAllocationMemoryProperties(allocator, alloc, &memPropFlags);
  18202 
  18203 if(memPropFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT)
  18204 {
  18205     // Allocation ended up in a mappable memory and is already mapped - write to it directly.
  18206 
  18207     // [Executed in runtime]:
  18208     memcpy(allocInfo.pMappedData, myData, myDataSize);
  18209     result = vmaFlushAllocation(allocator, alloc, 0, VK_WHOLE_SIZE);
  18210     // Check result...
  18211 
  18212     VkBufferMemoryBarrier bufMemBarrier = { VK_STRUCTURE_TYPE_BUFFER_MEMORY_BARRIER };
  18213     bufMemBarrier.srcAccessMask = VK_ACCESS_HOST_WRITE_BIT;
  18214     bufMemBarrier.dstAccessMask = VK_ACCESS_UNIFORM_READ_BIT;
  18215     bufMemBarrier.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
  18216     bufMemBarrier.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
  18217     bufMemBarrier.buffer = buf;
  18218     bufMemBarrier.offset = 0;
  18219     bufMemBarrier.size = VK_WHOLE_SIZE;
  18220 
  18221     vkCmdPipelineBarrier(cmdBuf, VK_PIPELINE_STAGE_HOST_BIT, VK_PIPELINE_STAGE_VERTEX_SHADER_BIT,
  18222         0, 0, nullptr, 1, &bufMemBarrier, 0, nullptr);
  18223 }
  18224 else
  18225 {
  18226     // Allocation ended up in a non-mappable memory - a transfer using a staging buffer is required.
  18227     VkBufferCreateInfo stagingBufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
  18228     stagingBufCreateInfo.size = 65536;
  18229     stagingBufCreateInfo.usage = VK_BUFFER_USAGE_TRANSFER_SRC_BIT;
  18230 
  18231     VmaAllocationCreateInfo stagingAllocCreateInfo = {};
  18232     stagingAllocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
  18233     stagingAllocCreateInfo.flags = VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT |
  18234         VMA_ALLOCATION_CREATE_MAPPED_BIT;
  18235 
  18236     VkBuffer stagingBuf;
  18237     VmaAllocation stagingAlloc;
  18238     VmaAllocationInfo stagingAllocInfo;
  18239     result = vmaCreateBuffer(allocator, &stagingBufCreateInfo, &stagingAllocCreateInfo,
  18240         &stagingBuf, &stagingAlloc, &stagingAllocInfo);
  18241     // Check result...
  18242 
  18243     // [Executed in runtime]:
  18244     memcpy(stagingAllocInfo.pMappedData, myData, myDataSize);
  18245     result = vmaFlushAllocation(allocator, stagingAlloc, 0, VK_WHOLE_SIZE);
  18246     // Check result...
  18247 
  18248     VkBufferMemoryBarrier bufMemBarrier = { VK_STRUCTURE_TYPE_BUFFER_MEMORY_BARRIER };
  18249     bufMemBarrier.srcAccessMask = VK_ACCESS_HOST_WRITE_BIT;
  18250     bufMemBarrier.dstAccessMask = VK_ACCESS_TRANSFER_READ_BIT;
  18251     bufMemBarrier.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
  18252     bufMemBarrier.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
  18253     bufMemBarrier.buffer = stagingBuf;
  18254     bufMemBarrier.offset = 0;
  18255     bufMemBarrier.size = VK_WHOLE_SIZE;
  18256 
  18257     vkCmdPipelineBarrier(cmdBuf, VK_PIPELINE_STAGE_HOST_BIT, VK_PIPELINE_STAGE_TRANSFER_BIT,
  18258         0, 0, nullptr, 1, &bufMemBarrier, 0, nullptr);
  18259 
  18260     VkBufferCopy bufCopy = {
  18261         0, // srcOffset
  18262         0, // dstOffset,
  18263         myDataSize, // size
  18264     };
  18265 
  18266     vkCmdCopyBuffer(cmdBuf, stagingBuf, buf, 1, &bufCopy);
  18267 
  18268     VkBufferMemoryBarrier bufMemBarrier2 = { VK_STRUCTURE_TYPE_BUFFER_MEMORY_BARRIER };
  18269     bufMemBarrier2.srcAccessMask = VK_ACCESS_TRANSFER_WRITE_BIT;
  18270     bufMemBarrier2.dstAccessMask = VK_ACCESS_UNIFORM_READ_BIT; // We created a uniform buffer
  18271     bufMemBarrier2.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
  18272     bufMemBarrier2.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
  18273     bufMemBarrier2.buffer = buf;
  18274     bufMemBarrier2.offset = 0;
  18275     bufMemBarrier2.size = VK_WHOLE_SIZE;
  18276 
  18277     vkCmdPipelineBarrier(cmdBuf, VK_PIPELINE_STAGE_TRANSFER_BIT, VK_PIPELINE_STAGE_VERTEX_SHADER_BIT,
  18278         0, 0, nullptr, 1, &bufMemBarrier2, 0, nullptr);
  18279 }
  18280 \endcode
  18281 
  18282 \section usage_patterns_other_use_cases Other use cases
  18283 
  18284 Here are some other, less obvious use cases and their recommended settings:
  18285 
  18286 - An image that is used only as transfer source and destination, but it should stay on the device,
  18287   as it is used to temporarily store a copy of some texture, e.g. from the current to the next frame,
  18288   for temporal antialiasing or other temporal effects.
  18289   - Use `VkImageCreateInfo::usage = VK_IMAGE_USAGE_TRANSFER_SRC_BIT | VK_IMAGE_USAGE_TRANSFER_DST_BIT`
  18290   - Use VmaAllocationCreateInfo::usage = #VMA_MEMORY_USAGE_AUTO
  18291 - An image that is used only as transfer source and destination, but it should be placed
  18292   in the system RAM even though it doesn't need to be mapped, because it serves as a "swap" copy to evict
  18293   least recently used textures from VRAM.
  18294   - Use `VkImageCreateInfo::usage = VK_IMAGE_USAGE_TRANSFER_SRC_BIT | VK_IMAGE_USAGE_TRANSFER_DST_BIT`
  18295   - Use VmaAllocationCreateInfo::usage = #VMA_MEMORY_USAGE_AUTO_PREFER_HOST,
  18296     as VMA needs a hint here to differentiate from the previous case.
  18297 - A buffer that you want to map and write from the CPU, directly read from the GPU
  18298   (e.g. as a uniform or vertex buffer), but you have a clear preference to place it in device or
  18299   host memory due to its large size.
  18300   - Use `VkBufferCreateInfo::usage = VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT`
  18301   - Use VmaAllocationCreateInfo::usage = #VMA_MEMORY_USAGE_AUTO_PREFER_DEVICE or #VMA_MEMORY_USAGE_AUTO_PREFER_HOST
  18302   - Use VmaAllocationCreateInfo::flags = #VMA_ALLOCATION_CREATE_HOST_ACCESS_SEQUENTIAL_WRITE_BIT
  18303 
  18304 
  18305 \page configuration Configuration
  18306 
  18307 Please check "CONFIGURATION SECTION" in the code to find macros that you can define
  18308 before each include of this file or change directly in this file to provide
  18309 your own implementation of basic facilities like assert, `min()` and `max()` functions,
  18310 mutex, atomic etc.
  18311 
  18312 For example, define `VMA_ASSERT(expr)` before including the library to provide
  18313 custom implementation of the assertion, compatible with your project.
  18314 By default it is defined to standard C `assert(expr)` in `_DEBUG` configuration
  18315 and empty otherwise.
  18316 
  18317 Similarly, you can define `VMA_LEAK_LOG_FORMAT` macro to enable printing of leaked (unfreed) allocations,
  18318 including their names and other parameters. Example:
  18319 
  18320 \code
  18321 #define VMA_LEAK_LOG_FORMAT(format, ...) do { \
  18322         printf((format), __VA_ARGS__); \
  18323         printf("\n"); \
  18324     } while(false)
  18325 \endcode
  18326 
  18327 \section config_Vulkan_functions Pointers to Vulkan functions
  18328 
  18329 There are multiple ways to import pointers to Vulkan functions in the library.
  18330 In the simplest case you don't need to do anything.
  18331 If the compilation or linking of your program or the initialization of the #VmaAllocator
  18332 doesn't work for you, you can try to reconfigure it.
  18333 
  18334 First, the allocator tries to fetch pointers to Vulkan functions linked statically,
  18335 like this:
  18336 
  18337 \code
  18338 m_VulkanFunctions.vkAllocateMemory = (PFN_vkAllocateMemory)vkAllocateMemory;
  18339 \endcode
  18340 
  18341 If you want to disable this feature, set configuration macro: `#define VMA_STATIC_VULKAN_FUNCTIONS 0`.
  18342 
  18343 Second, you can provide the pointers yourself by setting member VmaAllocatorCreateInfo::pVulkanFunctions.
  18344 You can fetch them e.g. using functions `vkGetInstanceProcAddr` and `vkGetDeviceProcAddr` or
  18345 by using a helper library like [volk](https://github.com/zeux/volk).
  18346 
  18347 Third, VMA tries to fetch remaining pointers that are still null by calling
  18348 `vkGetInstanceProcAddr` and `vkGetDeviceProcAddr` on its own.
  18349 You only need to fill in VmaVulkanFunctions::vkGetInstanceProcAddr and VmaVulkanFunctions::vkGetDeviceProcAddr.
  18350 Other pointers will be fetched automatically.
  18351 If you want to disable this feature, set configuration macro: `#define VMA_DYNAMIC_VULKAN_FUNCTIONS 0`.
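
For example, a minimal sketch of this dynamic setup, assuming `instance`, `physicalDevice`, and `device` were created earlier:

\code
VmaVulkanFunctions vulkanFunctions = {};
vulkanFunctions.vkGetInstanceProcAddr = vkGetInstanceProcAddr;
vulkanFunctions.vkGetDeviceProcAddr = vkGetDeviceProcAddr;

VmaAllocatorCreateInfo allocatorCreateInfo = {};
allocatorCreateInfo.physicalDevice = physicalDevice;
allocatorCreateInfo.device = device;
allocatorCreateInfo.instance = instance;
allocatorCreateInfo.pVulkanFunctions = &vulkanFunctions;

VmaAllocator allocator;
vmaCreateAllocator(&allocatorCreateInfo, &allocator);
\endcode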
  18352 
  18353 Finally, all the function pointers required by the library (considering selected
  18354 Vulkan version and enabled extensions) are checked with `VMA_ASSERT` if they are not null.
  18355 
  18356 
  18357 \section custom_memory_allocator Custom host memory allocator
  18358 
  18359 If you use a custom allocator for CPU memory rather than the default `operator new`
  18360 and `delete` from C++, you can make this library use your allocator as well
  18361 by filling optional member VmaAllocatorCreateInfo::pAllocationCallbacks. These
  18362 functions will be passed to Vulkan, as well as used by the library itself to
  18363 make any CPU-side allocations.
  18364 
  18365 \section allocation_callbacks Device memory allocation callbacks
  18366 
  18367 The library makes calls to `vkAllocateMemory()` and `vkFreeMemory()` internally.
  18368 You can set up callbacks to be informed about these calls, e.g. for the purpose
  18369 of gathering some statistics. To do it, fill optional member
  18370 VmaAllocatorCreateInfo::pDeviceMemoryCallbacks.
  18371 
  18372 \section heap_memory_limit Device heap memory limit
  18373 
  18374 When device memory of a certain heap runs out of free space, new allocations may
  18375 fail (returning an error code) or they may succeed, silently pushing some existing
  18376 memory blocks from GPU VRAM to system RAM (which degrades performance). This
  18377 behavior is implementation-dependent - it depends on GPU vendor and graphics
  18378 driver.
  18379 
  18380 On AMD cards it can be controlled while creating Vulkan device object by using
  18381 VK_AMD_memory_overallocation_behavior extension, if available.
  18382 
  18383 Alternatively, if you want to test how your program behaves with a limited amount of Vulkan device
  18384 memory available without switching your graphics card to one that really has
  18385 smaller VRAM, you can use a feature of this library intended for this purpose.
  18386 To do it, fill optional member VmaAllocatorCreateInfo::pHeapSizeLimit.
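
A minimal sketch, assuming you want to cap heap 0 at 1 GB for testing (`VK_WHOLE_SIZE` leaves a heap unrestricted):

\code
VkDeviceSize heapSizeLimits[VK_MAX_MEMORY_HEAPS];
for(uint32_t i = 0; i < VK_MAX_MEMORY_HEAPS; ++i)
    heapSizeLimits[i] = VK_WHOLE_SIZE;
heapSizeLimits[0] = 1ull * 1024 * 1024 * 1024;

VmaAllocatorCreateInfo allocatorCreateInfo = {};
// ... fill other members ...
allocatorCreateInfo.pHeapSizeLimit = heapSizeLimits;
\endcode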
  18387 
  18388 
  18389 
  18390 \page vk_khr_dedicated_allocation VK_KHR_dedicated_allocation
  18391 
  18392 VK_KHR_dedicated_allocation is a Vulkan extension which can be used to improve
  18393 performance on some GPUs. It augments the Vulkan API with the possibility to query the
  18394 driver whether it prefers a particular buffer or image to have its own, dedicated
  18395 allocation (separate `VkDeviceMemory` block) for better efficiency - to be able
  18396 to do some internal optimizations. The extension is supported by this library.
  18397 It will be used automatically when enabled.
  18398 
  18399 It has been promoted to core Vulkan 1.1, so if you use an eligible Vulkan version
  18400 and inform VMA about it by setting VmaAllocatorCreateInfo::vulkanApiVersion,
  18401 you are all set.
  18402 
  18403 Otherwise, if you want to use it as an extension:
  18404 
  18405 1 . When creating the Vulkan device, check if the following 2 device extensions are
  18406 supported (call `vkEnumerateDeviceExtensionProperties()`).
  18407 If yes, enable them (fill `VkDeviceCreateInfo::ppEnabledExtensionNames`).
  18408 
  18409 - VK_KHR_get_memory_requirements2
  18410 - VK_KHR_dedicated_allocation
  18411 
  18412 If you enabled these extensions:
  18413 
  18414 2 . Use #VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT flag when creating
  18415 your #VmaAllocator to inform the library that you enabled required extensions
  18416 and you want the library to use them.
  18417 
  18418 \code
  18419 allocatorInfo.flags |= VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT;
  18420 
  18421 vmaCreateAllocator(&allocatorInfo, &allocator);
  18422 \endcode
  18423 
  18424 That is all. The extension will be automatically used whenever you create a
  18425 buffer using vmaCreateBuffer() or image using vmaCreateImage().
  18426 
  18427 When using the extension together with the Vulkan Validation Layer, you will receive
  18428 warnings like this:
  18429 
  18430 _vkBindBufferMemory(): Binding memory to buffer 0x33 but vkGetBufferMemoryRequirements() has not been called on that buffer._
  18431 
  18432 It is OK, you should just ignore it. It happens because you use function
  18433 `vkGetBufferMemoryRequirements2KHR()` instead of standard
  18434 `vkGetBufferMemoryRequirements()`, while the validation layer seems to be
  18435 unaware of it.
  18436 
  18437 To learn more about this extension, see:
  18438 
  18439 - [VK_KHR_dedicated_allocation in Vulkan specification](https://www.khronos.org/registry/vulkan/specs/1.2-extensions/html/chap50.html#VK_KHR_dedicated_allocation)
  18440 - [VK_KHR_dedicated_allocation unofficial manual](http://asawicki.info/articles/VK_KHR_dedicated_allocation.php5)
  18441 
  18442 
  18443 
  18444 \page vk_ext_memory_priority VK_EXT_memory_priority
  18445 
  18446 VK_EXT_memory_priority is a device extension that allows passing an additional "priority"
  18447 value with Vulkan memory allocations. The implementation may use it to keep certain
  18448 buffers and images that are critical for performance in device-local memory
  18449 in cases when the memory is over-subscribed, while others may be moved to system memory.
  18450 
  18451 VMA offers convenient usage of this extension.
  18452 If you enable it, you can pass a "priority" parameter when creating allocations or custom pools
  18453 and the library automatically passes the value to Vulkan using this extension.
  18454 
  18455 If you want to use this extension in connection with VMA, follow these steps:
  18456 
  18457 \section vk_ext_memory_priority_initialization Initialization
  18458 
  18459 1) Call `vkEnumerateDeviceExtensionProperties` for the physical device.
  18460 Check if the extension is supported - if returned array of `VkExtensionProperties` contains "VK_EXT_memory_priority".
  18461 
  18462 2) Call `vkGetPhysicalDeviceFeatures2` for the physical device instead of old `vkGetPhysicalDeviceFeatures`.
  18463 Attach additional structure `VkPhysicalDeviceMemoryPriorityFeaturesEXT` to `VkPhysicalDeviceFeatures2::pNext` to be returned.
  18464 Check if the device feature is really supported - check if `VkPhysicalDeviceMemoryPriorityFeaturesEXT::memoryPriority` is true.
  18465 
  18466 3) While creating device with `vkCreateDevice`, enable this extension - add "VK_EXT_memory_priority"
  18467 to the list passed as `VkDeviceCreateInfo::ppEnabledExtensionNames`.
  18468 
  18469 4) While creating the device, also don't set `VkDeviceCreateInfo::pEnabledFeatures`.
  18470 Fill in `VkPhysicalDeviceFeatures2` structure instead and pass it as `VkDeviceCreateInfo::pNext`.
  18471 Enable this device feature - attach additional structure `VkPhysicalDeviceMemoryPriorityFeaturesEXT` to
  18472 `VkPhysicalDeviceFeatures2::pNext` chain and set its member `memoryPriority` to `VK_TRUE`.
  18473 
  18474 5) While creating #VmaAllocator with vmaCreateAllocator() inform VMA that you
  18475 have enabled this extension and feature - add #VMA_ALLOCATOR_CREATE_EXT_MEMORY_PRIORITY_BIT
  18476 to VmaAllocatorCreateInfo::flags.
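
A minimal sketch of steps 2)-4), with error handling omitted:

\code
VkPhysicalDeviceMemoryPriorityFeaturesEXT memoryPriorityFeatures = {
    VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_MEMORY_PRIORITY_FEATURES_EXT };
VkPhysicalDeviceFeatures2 features2 = { VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_FEATURES_2 };
features2.pNext = &memoryPriorityFeatures;
vkGetPhysicalDeviceFeatures2(physicalDevice, &features2);
// Check that memoryPriorityFeatures.memoryPriority == VK_TRUE here...

// Later, when creating the device, pass the same chain (with memoryPriority
// set to VK_TRUE) through VkDeviceCreateInfo::pNext and add
// "VK_EXT_memory_priority" to ppEnabledExtensionNames.
\endcode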
  18477 
  18478 \section vk_ext_memory_priority_usage Usage
  18479 
  18480 When using this extension, you should initialize the following members:
  18481 
  18482 - VmaAllocationCreateInfo::priority when creating a dedicated allocation with #VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT.
  18483 - VmaPoolCreateInfo::priority when creating a custom pool.
  18484 
  18485 It should be a floating-point value between `0.0f` and `1.0f`, where the recommended default is `0.5f`.
  18486 Memory allocated with a higher value can be treated by the Vulkan implementation as higher priority,
  18487 and so it can have lower chances of being pushed out to system memory and experiencing degraded performance.
  18488 
  18489 It might be a good idea to create performance-critical resources like color-attachment or depth-stencil images
  18490 as dedicated and give them high priority. For example:
  18491 
  18492 \code
  18493 VkImageCreateInfo imgCreateInfo = { VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO };
  18494 imgCreateInfo.imageType = VK_IMAGE_TYPE_2D;
  18495 imgCreateInfo.extent.width = 3840;
  18496 imgCreateInfo.extent.height = 2160;
  18497 imgCreateInfo.extent.depth = 1;
  18498 imgCreateInfo.mipLevels = 1;
  18499 imgCreateInfo.arrayLayers = 1;
  18500 imgCreateInfo.format = VK_FORMAT_R8G8B8A8_UNORM;
  18501 imgCreateInfo.tiling = VK_IMAGE_TILING_OPTIMAL;
  18502 imgCreateInfo.initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;
  18503 imgCreateInfo.usage = VK_IMAGE_USAGE_SAMPLED_BIT | VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT;
  18504 imgCreateInfo.samples = VK_SAMPLE_COUNT_1_BIT;
  18505 
  18506 VmaAllocationCreateInfo allocCreateInfo = {};
  18507 allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
  18508 allocCreateInfo.flags = VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT;
  18509 allocCreateInfo.priority = 1.0f;
  18510 
  18511 VkImage img;
  18512 VmaAllocation alloc;
  18513 vmaCreateImage(allocator, &imgCreateInfo, &allocCreateInfo, &img, &alloc, nullptr);
  18514 \endcode
  18515 
  18516 The `priority` member is ignored in the following situations:
  18517 
  18518 - Allocations created in custom pools: They inherit the priority, along with all other allocation parameters
  18519   from the parameters passed in #VmaPoolCreateInfo when the pool was created.
  18520 - Allocations created in default pools: They inherit the priority from the parameters
  18521   VMA used when creating default pools, which means `priority == 0.5f`.
  18522 
  18523 
  18524 \page vk_amd_device_coherent_memory VK_AMD_device_coherent_memory
  18525 
  18526 VK_AMD_device_coherent_memory is a device extension that enables access to
  18527 additional memory types with `VK_MEMORY_PROPERTY_DEVICE_COHERENT_BIT_AMD` and
  18528 `VK_MEMORY_PROPERTY_DEVICE_UNCACHED_BIT_AMD` flags. It is useful mostly for
  18529 allocation of buffers intended for writing "breadcrumb markers" in between passes
  18530 or draw calls, which in turn are useful for debugging GPU crash/hang/TDR cases.
  18531 
  18532 When the extension is available but has not been enabled, the Vulkan physical device
  18533 still exposes those memory types, but their usage is forbidden. VMA automatically
  18534 takes care of that - it returns `VK_ERROR_FEATURE_NOT_PRESENT` when an attempt
  18535 to allocate memory of such type is made.
  18536 
  18537 If you want to use this extension in connection with VMA, follow these steps:
  18538 
  18539 \section vk_amd_device_coherent_memory_initialization Initialization
  18540 
  18541 1) Call `vkEnumerateDeviceExtensionProperties` for the physical device.
  18542 Check if the extension is supported - if returned array of `VkExtensionProperties` contains "VK_AMD_device_coherent_memory".
  18543 
  18544 2) Call `vkGetPhysicalDeviceFeatures2` for the physical device instead of old `vkGetPhysicalDeviceFeatures`.
  18545 Attach additional structure `VkPhysicalDeviceCoherentMemoryFeaturesAMD` to `VkPhysicalDeviceFeatures2::pNext` to be returned.
  18546 Check if the device feature is really supported - check if `VkPhysicalDeviceCoherentMemoryFeaturesAMD::deviceCoherentMemory` is true.
  18547 
  18548 3) While creating device with `vkCreateDevice`, enable this extension - add "VK_AMD_device_coherent_memory"
  18549 to the list passed as `VkDeviceCreateInfo::ppEnabledExtensionNames`.
  18550 
  18551 4) While creating the device, also don't set `VkDeviceCreateInfo::pEnabledFeatures`.
  18552 Fill in `VkPhysicalDeviceFeatures2` structure instead and pass it as `VkDeviceCreateInfo::pNext`.
  18553 Enable this device feature - attach additional structure `VkPhysicalDeviceCoherentMemoryFeaturesAMD` to
  18554 `VkPhysicalDeviceFeatures2::pNext` and set its member `deviceCoherentMemory` to `VK_TRUE`.
  18555 
  18556 5) While creating #VmaAllocator with vmaCreateAllocator() inform VMA that you
  18557 have enabled this extension and feature - add #VMA_ALLOCATOR_CREATE_AMD_DEVICE_COHERENT_MEMORY_BIT
  18558 to VmaAllocatorCreateInfo::flags.
  18559 
  18560 \section vk_amd_device_coherent_memory_usage Usage
  18561 
  18562 After following the steps described above, you can create VMA allocations and custom pools
  18563 out of the special `DEVICE_COHERENT` and `DEVICE_UNCACHED` memory types on eligible
  18564 devices. There are multiple ways to do it, for example:
  18565 
  18566 - You can request or prefer to allocate out of such memory types by adding
  18567   `VK_MEMORY_PROPERTY_DEVICE_UNCACHED_BIT_AMD` to VmaAllocationCreateInfo::requiredFlags
  18568   or VmaAllocationCreateInfo::preferredFlags. Those flags can be freely mixed with
  18569   other ways of \ref choosing_memory_type, like setting VmaAllocationCreateInfo::usage (see the sketch after this list).
  18570 - If you manually found memory type index to use for this purpose, force allocation
  18571   from this specific index by setting VmaAllocationCreateInfo::memoryTypeBits `= 1u << index`.
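
A minimal sketch of the first option, preferring an uncached, device-coherent memory type for a "breadcrumb marker" buffer:

\code
VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
allocCreateInfo.preferredFlags = VK_MEMORY_PROPERTY_DEVICE_UNCACHED_BIT_AMD;
\endcode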
  18572 
  18573 \section vk_amd_device_coherent_memory_more_information More information
  18574 
  18575 To learn more about this extension, see [VK_AMD_device_coherent_memory in Vulkan specification](https://www.khronos.org/registry/vulkan/specs/1.2-extensions/man/html/VK_AMD_device_coherent_memory.html)
  18576 
  18577 Example use of this extension can be found in the code of the sample and test suite
  18578 accompanying this library.
  18579 
  18580 
  18581 \page enabling_buffer_device_address Enabling buffer device address
  18582 
  18583 The device extension VK_KHR_buffer_device_address
  18584 allows fetching a raw GPU pointer to a buffer and passing it for use in shader code.
  18585 It has been promoted to core Vulkan 1.2.
  18586 
  18587 If you want to use this feature in connection with VMA, follow these steps:
  18588 
  18589 \section enabling_buffer_device_address_initialization Initialization
  18590 
1) (For Vulkan version < 1.2) Call `vkEnumerateDeviceExtensionProperties` for the physical device.
Check if the extension is supported - whether the returned array of `VkExtensionProperties` contains
"VK_KHR_buffer_device_address".

2) Call `vkGetPhysicalDeviceFeatures2` for the physical device instead of the old `vkGetPhysicalDeviceFeatures`.
Attach an additional structure `VkPhysicalDeviceBufferDeviceAddressFeatures*` to `VkPhysicalDeviceFeatures2::pNext` to be returned.
Check if the device feature is really supported - whether `VkPhysicalDeviceBufferDeviceAddressFeatures::bufferDeviceAddress` is true.

3) (For Vulkan version < 1.2) While creating the device with `vkCreateDevice`, enable this extension - add
"VK_KHR_buffer_device_address" to the list passed as `VkDeviceCreateInfo::ppEnabledExtensionNames`.

4) While creating the device, also don't set `VkDeviceCreateInfo::pEnabledFeatures`.
Fill in the `VkPhysicalDeviceFeatures2` structure instead and pass it as `VkDeviceCreateInfo::pNext`.
Enable this device feature - attach an additional structure `VkPhysicalDeviceBufferDeviceAddressFeatures*` to
`VkPhysicalDeviceFeatures2::pNext` and set its member `bufferDeviceAddress` to `VK_TRUE`.

5) While creating the #VmaAllocator with vmaCreateAllocator(), inform VMA that you
have enabled this feature - add #VMA_ALLOCATOR_CREATE_BUFFER_DEVICE_ADDRESS_BIT
to VmaAllocatorCreateInfo::flags. A combined sketch of steps 2), 4), and 5) follows below.

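Below is a minimal sketch of steps 2), 4), and 5) on Vulkan 1.2+, where the feature structure is core and no extension string is needed (on older versions, use the KHR/EXT-suffixed names instead); `physicalDevice` is assumed to be the chosen `VkPhysicalDevice`.

\code
VkPhysicalDeviceBufferDeviceAddressFeatures bdaFeatures = {};
bdaFeatures.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_BUFFER_DEVICE_ADDRESS_FEATURES;

VkPhysicalDeviceFeatures2 features2 = {};
features2.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_FEATURES_2;
features2.pNext = &bdaFeatures;
vkGetPhysicalDeviceFeatures2(physicalDevice, &features2);

if(bdaFeatures.bufferDeviceAddress == VK_TRUE)
{
    // Step 4: chain features2 into VkDeviceCreateInfo::pNext when calling
    // vkCreateDevice, leaving pEnabledFeatures null (not shown), then:
    VmaAllocatorCreateInfo allocatorCreateInfo = {};
    allocatorCreateInfo.flags |= VMA_ALLOCATOR_CREATE_BUFFER_DEVICE_ADDRESS_BIT;
    // Fill the remaining members and call vmaCreateAllocator() as usual.
}
\endcode
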
\section enabling_buffer_device_address_usage Usage

After following the steps described above, you can create buffers with `VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT*` using VMA.
The library automatically adds `VK_MEMORY_ALLOCATE_DEVICE_ADDRESS_BIT*` to
allocated memory blocks wherever it might be needed.

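For example, a minimal sketch, assuming `allocator` and `device` were created as described above (the Vulkan 1.2 spelling `vkGetBufferDeviceAddress` is used; with only the extension, the KHR-suffixed function applies):

\code
VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
bufCreateInfo.size = 65536;
bufCreateInfo.usage = VK_BUFFER_USAGE_STORAGE_BUFFER_BIT |
    VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT;

VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;

VkBuffer buf;
VmaAllocation alloc;
vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, nullptr);

// The memory behind buf was allocated with VK_MEMORY_ALLOCATE_DEVICE_ADDRESS_BIT,
// so querying the raw GPU address is valid:
VkBufferDeviceAddressInfo addressInfo = { VK_STRUCTURE_TYPE_BUFFER_DEVICE_ADDRESS_INFO };
addressInfo.buffer = buf;
VkDeviceAddress addr = vkGetBufferDeviceAddress(device, &addressInfo);
\endcode
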
Please note that the library supports only `VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT*`.
The second part of this functionality, related to "capture and replay", is not supported,
as it is intended for use in debugging tools like RenderDoc, not in everyday Vulkan usage.

\section enabling_buffer_device_address_more_information More information

To learn more about this extension, see [VK_KHR_buffer_device_address in the Vulkan specification](https://www.khronos.org/registry/vulkan/specs/1.2-extensions/html/chap46.html#VK_KHR_buffer_device_address).

Example use of this extension can be found in the code of the sample and test suite
accompanying this library.

\page general_considerations General considerations

\section general_considerations_thread_safety Thread safety

- The library has no global state, so separate #VmaAllocator objects can be used
  independently.
  There should be no need to create multiple such objects though - one per `VkDevice` is enough.
- By default, all calls to functions that take #VmaAllocator as the first parameter
  are safe to call from multiple threads simultaneously because they are
  synchronized internally when needed.
  This includes allocation and deallocation from the default memory pool, as well as from a custom #VmaPool.
- When the allocator is created with the #VMA_ALLOCATOR_CREATE_EXTERNALLY_SYNCHRONIZED_BIT
  flag, calls to functions that take such a #VmaAllocator object must be
  synchronized externally.
- Access to a #VmaAllocation object must be externally synchronized. For example,
  you must not call vmaGetAllocationInfo() and vmaMapMemory() from different
  threads at the same time if you pass the same #VmaAllocation object to these
  functions. See the sketch after this list.
- #VmaVirtualBlock is not safe to use from multiple threads simultaneously.

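To make the fourth rule concrete, here is a minimal sketch, assuming the application adds its own lock around every use of a #VmaAllocation shared between threads (the mutex and both functions are hypothetical application code, not part of the library):

\code
#include <mutex>

std::mutex g_allocMutex; // hypothetical application-side lock

void ThreadA(VmaAllocator allocator, VmaAllocation alloc)
{
    std::lock_guard<std::mutex> lock(g_allocMutex);
    VmaAllocationInfo info;
    vmaGetAllocationInfo(allocator, alloc, &info);
}

void ThreadB(VmaAllocator allocator, VmaAllocation alloc)
{
    std::lock_guard<std::mutex> lock(g_allocMutex);
    void* mapped = nullptr;
    if(vmaMapMemory(allocator, alloc, &mapped) == VK_SUCCESS)
    {
        // ... read or write the mapped data ...
        vmaUnmapMemory(allocator, alloc);
    }
}
\endcode

No such lock is needed around calls that touch only the shared #VmaAllocator, such as vmaCreateBuffer(), since those are synchronized internally by default.
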
\section general_considerations_versioning_and_compatibility Versioning and compatibility

The library uses [**Semantic Versioning**](https://semver.org/),
which means version numbers follow the convention Major.Minor.Patch (e.g. 2.3.0), where:

- An incremented Patch version means a release is backward- and forward-compatible,
  introducing only some internal improvements, bug fixes, optimizations etc.
  or changes that are out of scope of the official API described in this documentation.
- An incremented Minor version means a release is backward-compatible,
  so existing code that uses the library should continue to work, while some new
  symbols could have been added: new structures, functions, new values in existing
  enums and bit flags, new structure members, but not new function parameters.
- An incremented Major version means a release could break some backward compatibility.

All changes between official releases are documented in the file "CHANGELOG.md".

\warning Backward compatibility is considered on the level of C++ source code, not binary linkage.
Adding new members to existing structures is treated as backward compatible if initializing
the new members to binary zero results in the old behavior.
You should always fully initialize all library structures to zeros and not rely on their
exact binary size.

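In practice that rule boils down to value-initializing every library struct, e.g.:

\code
// Value-initialization zeroes all members, including any added in future
// minor releases, which preserves the old behavior per the rule above.
VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;
// Avoid memcpy() or offset arithmetic that assumes the struct's exact binary size.
\endcode
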
\section general_considerations_validation_layer_warnings Validation layer warnings

When using this library, you can encounter the following types of warnings issued by the
Vulkan validation layer. They don't necessarily indicate a bug, so you may simply
ignore them.

- *vkBindBufferMemory(): Binding memory to buffer 0xeb8e4 but vkGetBufferMemoryRequirements() has not been called on that buffer.*
  - It happens when the VK_KHR_dedicated_allocation extension is enabled.
    The `vkGetBufferMemoryRequirements2KHR` function is used instead, while the validation layer seems to be unaware of it.
- *Mapping an image with layout VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL can result in undefined behavior if this memory is used by the device. Only GENERAL or PREINITIALIZED should be used.*
  - It happens when you map a buffer or image, because the library maps the entire
    `VkDeviceMemory` block, where different types of images and buffers may end
    up together, especially on GPUs with unified memory like Intel.
- *Non-linear image 0xebc91 is aliased with linear buffer 0xeb8e4 which may indicate a bug.*
  - It may happen when you use [defragmentation](@ref defragmentation).

\section general_considerations_allocation_algorithm Allocation algorithm

The library uses the following algorithm for allocation, in order:

-# Try to find a free range of memory in existing blocks.
-# If that failed, try to create a new block of `VkDeviceMemory` with the preferred block size.
-# If that failed, try to create such a block with size / 2, size / 4, size / 8.
-# If that failed, try to allocate a separate `VkDeviceMemory` for this allocation,
   just like when you use #VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT.
-# If that failed, choose another memory type that meets the requirements specified in
   VmaAllocationCreateInfo and go to point 1.
-# If that failed, return `VK_ERROR_OUT_OF_DEVICE_MEMORY`.

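Two allocation flags cut this algorithm short, as in the following illustrative sketch (the remaining members of `allocCreateInfo` are filled as usual):

\code
VmaAllocationCreateInfo allocCreateInfo = {};
allocCreateInfo.usage = VMA_MEMORY_USAGE_AUTO;

// Skip steps 1-3: go straight to a dedicated VkDeviceMemory (step 4).
allocCreateInfo.flags = VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT;

// Or the opposite - allow only step 1, never creating a new block,
// so the allocation fails instead of growing memory usage:
// allocCreateInfo.flags = VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT;
\endcode
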
\section general_considerations_features_not_supported Features not supported

Features deliberately excluded from the scope of this library:

-# **Data transfer.** Uploading (streaming) and downloading data of buffers and images
   between CPU and GPU memory and the related synchronization is the responsibility of the user.
   Defining some "texture" object that would automatically stream its data from a
   staging copy in CPU memory to GPU memory would rather be a feature of another,
   higher-level library implemented on top of VMA.
   VMA doesn't record any commands to a `VkCommandBuffer`. It just allocates memory.
-# **Recreation of buffers and images.** Although the library has functions for
   buffer and image creation: vmaCreateBuffer(), vmaCreateImage(), you need to
   recreate these objects yourself after defragmentation. That is because the big
   structures `VkBufferCreateInfo`, `VkImageCreateInfo` are not stored in the
   #VmaAllocation object.
-# **Handling CPU memory allocation failures.** When dynamically creating small C++
   objects in CPU memory (not Vulkan memory), allocation failures are not checked
   and handled gracefully, because that would complicate the code significantly and
   is usually not needed in desktop PC applications anyway.
   Success of an allocation is just checked with an assert.
-# **Code free of any compiler warnings.** Maintaining the library to compile and
   work correctly on so many different platforms is hard enough. Being free of
   any warnings, on any version of any compiler, is simply not feasible.
   There are many preprocessor macros that make some variables unused, function parameters unreferenced,
   or conditional expressions constant in some configurations.
   The code of this library should not be bigger or more complicated just to silence these warnings.
   It is recommended to disable such warnings instead.
-# This is a C++ library with a C interface. **Bindings or ports to any other programming languages** are welcome as external projects but
   are not going to be included in this repository.
*/