Compare commits

...

30 Commits

Author SHA1 Message Date
Steveice10
2fe2dd1482 Merge branch 'master' into vk-fixes 2024-02-23 17:11:16 -08:00
Steveice10
4f9fc88bb3 apt: Improve accuracy of applet slot states on system applet launch. (#7456) 2024-02-23 16:18:16 -08:00
GPUCode
d857743075 Downgrade blend factor crash to warning (#7459)
* pica_to_vk: Downgrade assert to warning

* pica_to_gl: Downgrade unreachable to warning
2024-02-22 15:43:44 -08:00
kylon
b5042a5257 Core: update kernel config memory to latest 11.17 (#7460) 2024-02-22 15:43:33 -08:00
Wunk
e524542a40 vk_texture_runtime: Use boost-static_vector (#7455)
* vk_texture_runtime: Use boost-`static_vector` for image init-barriers

Uses `static_vector` rather than `std::array`+`u32` when passing input
parameters into the initialization barriers.

* vk_texture_runtime: Use boost-`static_vector` for framebuffer attachments

* vk_texture_runtime: Use boost-`static_vector` for surface uploads
2024-02-22 02:35:57 +02:00
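A minimal sketch of the pattern this commit adopts (illustrative names, not the actual renderer code; it assumes the Boost.Container and Vulkan-Hpp headers are available): a `boost::container::static_vector` keeps fixed-capacity, stack-allocated storage like `std::array`, but also tracks its own size, so no separate `u32` count has to travel alongside the barriers.

```cpp
// Sketch only: collect a bounded number of image barriers and record them in one call.
// Assumes the caller never passes more images than the fixed capacity.
#include <span>
#include <boost/container/static_vector.hpp>
#include <vulkan/vulkan.hpp>

void RecordInitBarriers(vk::CommandBuffer cmdbuf, std::span<const vk::Image> images) {
    // Capacity is fixed at compile time; size() reflects how many barriers were pushed.
    boost::container::static_vector<vk::ImageMemoryBarrier, 8> barriers;
    for (const vk::Image image : images) {
        vk::ImageMemoryBarrier barrier{};
        barrier.setOldLayout(vk::ImageLayout::eUndefined)
            .setNewLayout(vk::ImageLayout::eGeneral)
            .setImage(image)
            .setSubresourceRange({vk::ImageAspectFlagBits::eColor, 0, 1, 0, 1});
        barriers.push_back(barrier);
    }
    // vk::ArrayProxy accepts any contiguous container, so the current size travels
    // with the data instead of being passed as a separate count variable.
    cmdbuf.pipelineBarrier(vk::PipelineStageFlagBits::eTopOfPipe,
                           vk::PipelineStageFlagBits::eAllCommands, {}, {}, {}, barriers);
}
```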
Steveice10
3a4ebb1413 file_util: Make sure portable user path is absolute. (#7448) 2024-02-18 15:21:53 -08:00
Steveice10
cbe8987036 ci: Update action versions. (#7449) 2024-02-18 08:23:15 -08:00
Charles Lombardo
da5aa70fc9 android: Port yuzu system info logging (#7431) 2024-02-17 20:10:10 -08:00
Castor215
749a721aa2 externals: disable system cpp-httplib if it is a shared object (#7446)
Co-authored-by: Castor216 <davidjamescastor215@proton.me>
2024-02-17 06:39:38 -08:00
SachinVin
bb003c2bd4 audio_core\hle\source.cpp: Improve accuracy of SourceStatus (#7432) 2024-02-17 02:12:54 +01:00
Tobias
7638f87f74 Port several small multiplayer PRs from yuzu (#7419)
* yuzu: Use displayed port on direct connect

* Color player counts in the multiplayer public lobby list

- Full lobbies have their player count displayed in red.
- Lobbies with one slot left have their player count displayed in orange.
- Empty lobbies have their player count grayed out.

* Add hotkeys for multiplayer actions

Default shortcuts were chosen to be intuitive (the first letter of the
action, or the first letter of its second word) and to work on all
types of keyboards. The hotkeys can also be used while playing a game,
as they are application-wide.

* Persist filters in multiplayer public lobby list

After connecting to a room, the chosen filter text and the "Games I Own",
"Hide Empty Rooms", and "Hide Full Rooms" values are persisted to the
configuration so they are preserved across restarts.

This makes it easier to rejoin a room if you regularly play the same
game, or after a crash.

* citra_qt/lobby: Fix multiplayer player count color in dark theme

Co-Authored-By: Kevnkkm <56404895+kevnkkm@users.noreply.github.com>

* Address review comments

---------

Co-authored-by: Narr the Reg <juangerman-13@hotmail.com>
Co-authored-by: Hugo Locurcio <hugo.locurcio@hugo.pro>
Co-authored-by: Kevnkkm <56404895+kevnkkm@users.noreply.github.com>
2024-02-16 04:34:10 -08:00
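A hypothetical, minimal illustration of an application-wide Qt shortcut like the multiplayer hotkeys described in this commit (the function and slot names are made up, not Citra's): setting `Qt::ApplicationShortcut` is what lets the hotkey fire even while the render window has focus.

```cpp
// Sketch only: register one application-wide shortcut and react to it.
#include <QKeySequence>
#include <QObject>
#include <QShortcut>
#include <QString>
#include <QWidget>

void RegisterLobbyHotkey(QWidget* main_window) {
    auto* shortcut = new QShortcut(QKeySequence(QStringLiteral("Ctrl+B")), main_window);
    // Application-wide context: the shortcut works regardless of which widget has focus.
    shortcut->setContext(Qt::ApplicationShortcut);
    QObject::connect(shortcut, &QShortcut::activated,
                     [main_window] { /* open the public lobby browser here */ });
}
```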
Steveice10
aa6809e2a8 renderer_vulkan: Use no more than target supported version. (#7439) 2024-02-15 19:38:32 -08:00
Steveice10
5e02be75a3 renderer_vulkan: Use getToolPropertiesEXT instead of getToolProperties (#7434)
getToolProperties is not available until Vulkan 1.3; we need to use the EXT version.
2024-02-13 21:43:09 -08:00
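A minimal sketch of querying tooling info through the VK_EXT_tooling_info entry point (illustrative code, not the renderer's): the EXT function can be used on Vulkan 1.1/1.2 drivers, whereas the core `vkGetPhysicalDeviceToolProperties` only exists from 1.3 onward.

```cpp
// Sketch only: load the extension entry point and enumerate attached tools.
#include <vector>
#include <vulkan/vulkan.h>

std::vector<VkPhysicalDeviceToolPropertiesEXT> QueryTools(VkInstance instance,
                                                          VkPhysicalDevice physical_device) {
    const auto get_tools = reinterpret_cast<PFN_vkGetPhysicalDeviceToolPropertiesEXT>(
        vkGetInstanceProcAddr(instance, "vkGetPhysicalDeviceToolPropertiesEXT"));
    if (!get_tools) {
        return {}; // Extension not available; treat as "no tools attached".
    }
    uint32_t count = 0;
    get_tools(physical_device, &count, nullptr);
    std::vector<VkPhysicalDeviceToolPropertiesEXT> tools(count);
    get_tools(physical_device, &count, tools.data());
    return tools;
}
```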
GPUCode
0c9037f075 renderer_vulkan: Rewrite descriptor management
* Switch to batched vkUpdateDescriptorSets from cached descriptor sets with templates
2024-02-12 00:08:47 +02:00
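A rough sketch of what "batched vkUpdateDescriptorSets" means (assumed names, not the rewritten Citra code): writes are queued while resources are bound and flushed with a single update call, instead of updating cached descriptor sets one at a time through templates.

```cpp
// Sketch only: queue descriptor writes and flush them in one vkUpdateDescriptorSets call.
#include <cstdint>
#include <deque>
#include <vector>
#include <vulkan/vulkan.hpp>

class DescriptorUpdateQueue {
public:
    explicit DescriptorUpdateQueue(vk::Device device) : device{device} {}

    void AddImage(vk::DescriptorSet set, std::uint32_t binding,
                  const vk::DescriptorImageInfo& info) {
        // std::deque never relocates existing elements, so the pointer stored in the
        // write below stays valid until Flush().
        const auto& stored = image_infos.emplace_back(info);
        writes.push_back(vk::WriteDescriptorSet{}
                             .setDstSet(set)
                             .setDstBinding(binding)
                             .setDescriptorCount(1)
                             .setDescriptorType(vk::DescriptorType::eCombinedImageSampler)
                             .setPImageInfo(&stored));
    }

    void Flush() {
        device.updateDescriptorSets(writes, {});
        writes.clear();
        image_infos.clear();
    }

private:
    vk::Device device;
    std::vector<vk::WriteDescriptorSet> writes;
    std::deque<vk::DescriptorImageInfo> image_infos;
};
```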
GPUCode
4a63fc2ca2 renderer_vulkan: Rename renderpass cache to render manager
* It is no longer just a cache
2024-02-11 12:32:18 +02:00
GPUCode
9f5c8d0e2f renderer_vulkan: Remove vulkan prefix in SetObjectName 2024-02-11 12:32:08 +02:00
GPUCode
2bcbfeb861 vk_master_semaphore: Remove waitable atomic
* These are buggy on some platforms, and regular condition_variables are faster most of the time
2024-02-11 12:31:59 +02:00
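A minimal sketch of the trade-off this commit makes (not the actual vk_master_semaphore code): waiting for a monotonically increasing "GPU tick" with a plain mutex and `std::condition_variable` instead of `std::atomic<T>::wait`/`notify_all`, which the message notes is unreliable on some platforms.

```cpp
// Sketch only: signal and wait on a monotonically increasing tick value.
#include <algorithm>
#include <condition_variable>
#include <cstdint>
#include <mutex>

class TickWaiter {
public:
    void Signal(std::uint64_t tick) {
        {
            std::scoped_lock lk{mutex};
            known_tick = std::max(known_tick, tick);
        }
        cv.notify_all();
    }

    void Wait(std::uint64_t tick) {
        std::unique_lock lk{mutex};
        cv.wait(lk, [&] { return known_tick >= tick; });
    }

private:
    std::mutex mutex;
    std::condition_variable cv;
    std::uint64_t known_tick = 0;
};
```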
Tobias
b9c9beeee5 android: add basic support for google game dashboard (#7430)
This adds support for the Performance and Battery Saver modes in the Game Dashboard, which is found mostly on Google Pixel devices.
This does not yet define the specifics for the performance modes but does provide the initial basic support.

Co-authored-by: Emma <153868115+gaypotatoemma@users.noreply.github.com>
2024-02-10 17:24:10 -08:00
GPUCode
de993dcfbd service: Stub mcu::HWC (#7428) 2024-02-09 14:09:05 -08:00
oltolm
3c9157b1ec fix ASAN error in sdl_impl.cpp (#7427) 2024-02-09 14:08:15 -08:00
Ishan09811
0c40c10022 Update Android Deps (#7383) 2024-02-09 07:24:55 -05:00
Daniel López Guimaraes
2766118e33 http: Implement various missing commands (#7415) 2024-02-08 11:01:46 -08:00
Steveice10
06b26691ba soc: Pass accurate sockaddr length to socket functions. (#7426) 2024-02-08 11:01:38 -08:00
PabloMK7
d41ce64f7b Add ipv6 socket support (#7418)
* Add IPV6 socket support

* Suggestions
2024-02-07 19:22:44 -08:00
Tobias
1165a708d5 .tx/config: Use language mappings for android "tx pull" (#7422)
The language names we are using in the Android resources differ from those on Transifex.

We need to manually specify mappings for them, so Transifex is able to place the files in the correct folders.
2024-02-07 05:41:29 -08:00
Steveice10
19784355f9 build: Improve support for Windows cross-compilation. (#7389)
* build: Improve support for Windows cross-compilation.

* build: Move linuxdeploy download to bundle target execution time.
2024-02-05 10:09:50 -08:00
SachinVin
aa6a29d7e1 AudioCore/HLE/source: Partially implement last_buffer_id (#7397)
* AudioCore/HLE/source: Partially implement last_buffer_id

shared_memory.h: fix typo

* tests\audio_core\hle\source.cpp: Add test cases to verify last_buffer_id
2024-02-05 09:54:13 -08:00
GPUCode
106364e01e video_core: Use source3 when GPU_PREVIOUS is used in first stage (#7411) 2024-02-05 09:53:54 -08:00
GPUCode
d5a1bd07f3 glsl_shader_gen: Increase z=0 epsilon (#7408) 2024-02-05 09:53:41 -08:00
Steveice10
8afa27718c dumpkeys: Add seeddb.bin to output files. (#7417) 2024-02-05 09:14:14 -08:00
107 changed files with 3403 additions and 1447 deletions

View File

@@ -12,13 +12,13 @@ jobs:
if: ${{ !github.head_ref }}
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
with:
submodules: recursive
- name: Pack
run: ./.ci/source.sh
- name: Upload
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: source
path: artifacts/
@@ -37,11 +37,11 @@ jobs:
OS: linux
TARGET: ${{ matrix.target }}
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
with:
submodules: recursive
- name: Set up cache
uses: actions/cache@v3
uses: actions/cache@v4
with:
path: ${{ env.CCACHE_DIR }}
key: ${{ runner.os }}-${{ matrix.target }}-${{ github.sha }}
@@ -53,7 +53,7 @@ jobs:
run: ./.ci/pack.sh
if: ${{ matrix.target == 'appimage' }}
- name: Upload
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
if: ${{ matrix.target == 'appimage' }}
with:
name: ${{ env.OS }}-${{ env.TARGET }}
@@ -70,11 +70,11 @@ jobs:
OS: macos
TARGET: ${{ matrix.target }}
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
with:
submodules: recursive
- name: Set up cache
uses: actions/cache@v3
uses: actions/cache@v4
with:
path: ${{ env.CCACHE_DIR }}
key: ${{ runner.os }}-${{ matrix.target }}-${{ github.sha }}
@@ -87,7 +87,7 @@ jobs:
- name: Prepare outputs for caching
run: mv build/bundle $OS-$TARGET
- name: Cache outputs for universal build
uses: actions/cache/save@v3
uses: actions/cache/save@v4
with:
path: ${{ env.OS }}-${{ env.TARGET }}
key: ${{ runner.os }}-${{ matrix.target }}-${{ github.sha }}-${{ github.run_id }}-${{ github.run_attempt }}
@@ -98,15 +98,15 @@ jobs:
OS: macos
TARGET: universal
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
- name: Download x86_64 build from cache
uses: actions/cache/restore@v3
uses: actions/cache/restore@v4
with:
path: ${{ env.OS }}-x86_64
key: ${{ runner.os }}-x86_64-${{ github.sha }}-${{ github.run_id }}-${{ github.run_attempt }}
fail-on-cache-miss: true
- name: Download ARM64 build from cache
uses: actions/cache/restore@v3
uses: actions/cache/restore@v4
with:
path: ${{ env.OS }}-arm64
key: ${{ runner.os }}-arm64-${{ github.sha }}-${{ github.run_id }}-${{ github.run_attempt }}
@@ -118,7 +118,7 @@ jobs:
- name: Pack
run: ./.ci/pack.sh
- name: Upload
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: ${{ env.OS }}-${{ env.TARGET }}
path: artifacts/
@@ -137,11 +137,11 @@ jobs:
OS: windows
TARGET: ${{ matrix.target }}
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
with:
submodules: recursive
- name: Set up cache
uses: actions/cache@v3
uses: actions/cache@v4
with:
path: ${{ env.CCACHE_DIR }}
key: ${{ runner.os }}-${{ matrix.target }}-${{ github.sha }}
@@ -179,7 +179,7 @@ jobs:
- name: Pack
run: ./.ci/pack.sh
- name: Upload
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: ${{ env.OS }}-${{ env.TARGET }}
path: artifacts/
@@ -192,11 +192,11 @@ jobs:
OS: android
TARGET: universal
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
with:
submodules: recursive
- name: Set up cache
uses: actions/cache@v3
uses: actions/cache@v4
with:
path: |
~/.gradle/caches
@@ -228,7 +228,7 @@ jobs:
env:
UNPACKED: 1
- name: Upload
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: ${{ env.OS }}-${{ env.TARGET }}
path: src/android/app/artifacts/
@@ -242,11 +242,11 @@ jobs:
OS: ios
TARGET: arm64
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
with:
submodules: recursive
- name: Set up cache
uses: actions/cache@v3
uses: actions/cache@v4
with:
path: ${{ env.CCACHE_DIR }}
key: ${{ runner.os }}-ios-${{ github.sha }}
@@ -261,7 +261,7 @@ jobs:
needs: [windows, linux, macos-universal, android, source]
if: ${{ startsWith(github.ref, 'refs/tags/') }}
steps:
- uses: actions/download-artifact@v3
- uses: actions/download-artifact@v4
- name: Create release
uses: actions/create-release@v1
env:

View File

@@ -13,7 +13,7 @@ jobs:
image: citraemu/build-environments:linux-fresh
options: -u 1001
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Build

View File

@@ -20,11 +20,11 @@ jobs:
if: ${{ github.event.inputs.nightly != 'false' && github.repository == 'citra-emu/citra' }}
steps:
# this checkout is required to make sure the GitHub Actions scripts are available
- uses: actions/checkout@v3
- uses: actions/checkout@v4
name: Pre-checkout
with:
submodules: false
- uses: actions/github-script@v6
- uses: actions/github-script@v7
id: check-changes
name: 'Check for new changes'
env:
@@ -38,7 +38,7 @@ jobs:
return checkBaseChanges(github, context);
- run: npm install execa@5
if: ${{ steps.check-changes.outputs.result == 'true' }}
- uses: actions/checkout@v3
- uses: actions/checkout@v4
name: Checkout
if: ${{ steps.check-changes.outputs.result == 'true' }}
with:
@@ -46,7 +46,7 @@ jobs:
fetch-depth: 0
submodules: true
token: ${{ secrets.ALT_GITHUB_TOKEN }}
- uses: actions/github-script@v6
- uses: actions/github-script@v7
name: 'Update and tag new commits'
if: ${{ steps.check-changes.outputs.result == 'true' }}
env:
@@ -62,11 +62,11 @@ jobs:
if: ${{ github.event.inputs.canary != 'false' && github.repository == 'citra-emu/citra' }}
steps:
# this checkout is required to make sure the GitHub Actions scripts are available
- uses: actions/checkout@v3
- uses: actions/checkout@v4
name: Pre-checkout
with:
submodules: false
- uses: actions/github-script@v6
- uses: actions/github-script@v7
id: check-changes
name: 'Check for new changes'
env:
@@ -79,7 +79,7 @@ jobs:
return checkCanaryChanges(github, context);
- run: npm install execa@5
if: ${{ steps.check-changes.outputs.result == 'true' }}
- uses: actions/checkout@v3
- uses: actions/checkout@v4
name: Checkout
if: ${{ steps.check-changes.outputs.result == 'true' }}
with:
@@ -87,7 +87,7 @@ jobs:
fetch-depth: 0
submodules: true
token: ${{ secrets.ALT_GITHUB_TOKEN }}
- uses: actions/github-script@v6
- uses: actions/github-script@v7
name: 'Check and merge canary changes'
if: ${{ steps.check-changes.outputs.result == 'true' }}
env:

View File

@@ -10,7 +10,7 @@ jobs:
container: citraemu/build-environments:linux-fresh
if: ${{ github.repository == 'citra-emu/citra' }}
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
with:
submodules: recursive
fetch-depth: 0

View File

@@ -85,8 +85,6 @@ option(ENABLE_VULKAN "Enables the Vulkan renderer" ON)
option(USE_DISCORD_PRESENCE "Enables Discord Rich Presence" OFF)
CMAKE_DEPENDENT_OPTION(CITRA_ENABLE_BUNDLE_TARGET "Enable the distribution bundling target." ON "NOT ANDROID AND NOT IOS" OFF)
# Compile options
CMAKE_DEPENDENT_OPTION(COMPILE_WITH_DWARF "Add DWARF debugging information" ${IS_DEBUG_BUILD} "MINGW" OFF)
option(ENABLE_LTO "Enable link time optimization" ${DEFAULT_ENABLE_LTO})
@@ -249,6 +247,26 @@ if (ENABLE_QT)
if (ENABLE_QT_TRANSLATION)
find_package(Qt6 REQUIRED COMPONENTS LinguistTools)
endif()
if (NOT DEFINED QT_TARGET_PATH)
# Determine the location of the compile target's Qt.
get_target_property(qtcore_path Qt6::Core LOCATION_Release)
string(FIND "${qtcore_path}" "/bin/" qtcore_path_bin_pos REVERSE)
string(FIND "${qtcore_path}" "/lib/" qtcore_path_lib_pos REVERSE)
if (qtcore_path_bin_pos GREATER qtcore_path_lib_pos)
string(SUBSTRING "${qtcore_path}" 0 ${qtcore_path_bin_pos} QT_TARGET_PATH)
else()
string(SUBSTRING "${qtcore_path}" 0 ${qtcore_path_lib_pos} QT_TARGET_PATH)
endif()
endif()
if (NOT DEFINED QT_HOST_PATH)
# Use the same for host Qt if none is defined.
set(QT_HOST_PATH "${QT_TARGET_PATH}")
endif()
message(STATUS "Using target Qt at ${QT_TARGET_PATH}")
message(STATUS "Using host Qt at ${QT_HOST_PATH}")
endif()
# Use system tsl::robin_map if available (otherwise we fallback to version bundled with dynarmic)
@@ -424,7 +442,8 @@ else()
endif()
# Create target for outputting distributable bundles.
if (CITRA_ENABLE_BUNDLE_TARGET)
# Not supported for mobile platforms as distributables are built differently.
if (NOT ANDROID AND NOT IOS)
include(BundleTarget)
if (ENABLE_SDL2_FRONTEND)
bundle_target(citra)

View File

@@ -2,37 +2,104 @@
if (BUNDLE_TARGET_EXECUTE)
# --- Bundling method logic ---
function(symlink_safe_copy from to)
if (WIN32)
# Use cmake copy for maximum compatibility.
execute_process(COMMAND ${CMAKE_COMMAND} -E copy "${from}" "${to}"
RESULT_VARIABLE cp_result)
else()
# Use native copy to turn symlinks into normal files.
execute_process(COMMAND cp -L "${from}" "${to}"
RESULT_VARIABLE cp_result)
endif()
if (NOT cp_result EQUAL "0")
message(FATAL_ERROR "cp \"${from}\" \"${to}\" failed: ${cp_result}")
endif()
endfunction()
function(bundle_qt executable_path)
if (WIN32)
# Perform standalone bundling first to copy over all used libraries, as windeployqt does not do this.
bundle_standalone("${executable_path}" "${EXECUTABLE_PATH}" "${BUNDLE_LIBRARY_PATHS}")
get_filename_component(executable_parent_dir "${executable_path}" DIRECTORY)
find_program(windeployqt_executable windeployqt6)
# Create a qt.conf file pointing to the app directory.
# This ensures Qt can find its plugins.
file(WRITE "${executable_parent_dir}/qt.conf" "[Paths]\nprefix = .")
file(WRITE "${executable_parent_dir}/qt.conf" "[Paths]\nPrefix = .")
find_program(windeployqt_executable windeployqt6 PATHS "${QT_HOST_PATH}/bin")
find_program(qtpaths_executable qtpaths6 PATHS "${QT_HOST_PATH}/bin")
# TODO: Hack around windeployqt's poor cross-compilation support by
# TODO: making a local copy with a prefix pointing to the target Qt.
if (NOT "${QT_HOST_PATH}" STREQUAL "${QT_TARGET_PATH}")
set(windeployqt_dir "${BINARY_PATH}/windeployqt_copy")
file(MAKE_DIRECTORY "${windeployqt_dir}")
symlink_safe_copy("${windeployqt_executable}" "${windeployqt_dir}/windeployqt.exe")
symlink_safe_copy("${qtpaths_executable}" "${windeployqt_dir}/qtpaths.exe")
symlink_safe_copy("${QT_HOST_PATH}/bin/Qt6Core.dll" "${windeployqt_dir}")
if (EXISTS "${QT_TARGET_PATH}/share")
# Unix-style Qt; we need to wire up the paths manually.
file(WRITE "${windeployqt_dir}/qt.conf" "\
[Paths]\n
Prefix = ${QT_TARGET_PATH}\n \
ArchData = ${QT_TARGET_PATH}/share/qt6\n \
Binaries = ${QT_TARGET_PATH}/bin\n \
Data = ${QT_TARGET_PATH}/share/qt6\n \
Documentation = ${QT_TARGET_PATH}/share/qt6/doc\n \
Headers = ${QT_TARGET_PATH}/include/qt6\n \
Libraries = ${QT_TARGET_PATH}/lib\n \
LibraryExecutables = ${QT_TARGET_PATH}/share/qt6/bin\n \
Plugins = ${QT_TARGET_PATH}/share/qt6/plugins\n \
QmlImports = ${QT_TARGET_PATH}/share/qt6/qml\n \
Translations = ${QT_TARGET_PATH}/share/qt6/translations\n \
")
else()
# Windows-style Qt; the defaults should suffice.
file(WRITE "${windeployqt_dir}/qt.conf" "[Paths]\nPrefix = ${QT_TARGET_PATH}")
endif()
set(windeployqt_executable "${windeployqt_dir}/windeployqt.exe")
set(qtpaths_executable "${windeployqt_dir}/qtpaths.exe")
endif()
message(STATUS "Executing windeployqt for executable ${executable_path}")
execute_process(COMMAND "${windeployqt_executable}" "${executable_path}"
--qtpaths "${qtpaths_executable}"
--no-compiler-runtime --no-system-d3d-compiler --no-opengl-sw --no-translations
--plugindir "${executable_parent_dir}/plugins")
--plugindir "${executable_parent_dir}/plugins"
RESULT_VARIABLE windeployqt_result)
if (NOT windeployqt_result EQUAL "0")
message(FATAL_ERROR "windeployqt failed: ${windeployqt_result}")
endif()
# Remove the FFmpeg multimedia plugin as we don't include FFmpeg.
# We want to use the Windows media plugin instead, which is also included.
file(REMOVE "${executable_parent_dir}/plugins/multimedia/ffmpegmediaplugin.dll")
elseif (APPLE)
get_filename_component(executable_name "${executable_path}" NAME_WE)
find_program(MACDEPLOYQT_EXECUTABLE macdeployqt6)
find_program(macdeployqt_executable macdeployqt6 PATHS "${QT_HOST_PATH}/bin")
message(STATUS "Executing macdeployqt for executable ${executable_path}")
message(STATUS "Executing macdeployqt at \"${macdeployqt_executable}\" for executable \"${executable_path}\"")
execute_process(
COMMAND "${MACDEPLOYQT_EXECUTABLE}"
COMMAND "${macdeployqt_executable}"
"${executable_path}"
"-executable=${executable_path}/Contents/MacOS/${executable_name}"
-always-overwrite)
-always-overwrite
RESULT_VARIABLE macdeployqt_result)
if (NOT macdeployqt_result EQUAL "0")
message(FATAL_ERROR "macdeployqt failed: ${macdeployqt_result}")
endif()
# Bundling libraries can rewrite path information and break code signatures of system libraries.
# Perform an ad-hoc re-signing on the whole app bundle to fix this.
execute_process(COMMAND codesign --deep -fs - "${executable_path}")
execute_process(COMMAND codesign --deep -fs - "${executable_path}"
RESULT_VARIABLE codesign_result)
if (NOT codesign_result EQUAL "0")
message(FATAL_ERROR "codesign failed: ${codesign_result}")
endif()
else()
message(FATAL_ERROR "Unsupported OS for Qt bundling.")
endif()
@@ -44,9 +111,9 @@ if (BUNDLE_TARGET_EXECUTE)
if (enable_qt)
# Find qmake to make sure the plugin uses the right version of Qt.
find_program(QMAKE_EXECUTABLE qmake6)
find_program(qmake_executable qmake6 PATHS "${QT_HOST_PATH}/bin")
set(extra_linuxdeploy_env "QMAKE=${QMAKE_EXECUTABLE}")
set(extra_linuxdeploy_env "QMAKE=${qmake_executable}")
set(extra_linuxdeploy_args --plugin qt)
endif()
@@ -59,7 +126,11 @@ if (BUNDLE_TARGET_EXECUTE)
--executable "${executable_path}"
--icon-file "${source_path}/dist/citra.svg"
--desktop-file "${source_path}/dist/${executable_name}.desktop"
--appdir "${appdir_path}")
--appdir "${appdir_path}"
RESULT_VARIABLE linuxdeploy_appdir_result)
if (NOT linuxdeploy_appdir_result EQUAL "0")
message(FATAL_ERROR "linuxdeploy failed to create AppDir: ${linuxdeploy_appdir_result}")
endif()
if (enable_qt)
set(qt_hook_file "${appdir_path}/apprun-hooks/linuxdeploy-plugin-qt-hook.sh")
@@ -82,7 +153,11 @@ if (BUNDLE_TARGET_EXECUTE)
"OUTPUT=${bundle_dir}/${executable_name}.AppImage"
"${linuxdeploy_executable}"
--output appimage
--appdir "${appdir_path}")
--appdir "${appdir_path}"
RESULT_VARIABLE linuxdeploy_appimage_result)
if (NOT linuxdeploy_appimage_result EQUAL "0")
message(FATAL_ERROR "linuxdeploy failed to create AppImage: ${linuxdeploy_appimage_result}")
endif()
endfunction()
function(bundle_standalone executable_path original_executable_path bundle_library_paths)
@@ -109,16 +184,23 @@ if (BUNDLE_TARGET_EXECUTE)
file(MAKE_DIRECTORY ${lib_dir})
foreach (lib_file IN LISTS resolved_deps)
message(STATUS "Bundling library ${lib_file}")
# Use native copy to turn symlinks into normal files.
execute_process(COMMAND cp -L "${lib_file}" "${lib_dir}")
symlink_safe_copy("${lib_file}" "${lib_dir}")
endforeach()
endif()
# Add libs directory to executable rpath where applicable.
if (APPLE)
execute_process(COMMAND install_name_tool -add_rpath "@loader_path/libs" "${executable_path}")
execute_process(COMMAND install_name_tool -add_rpath "@loader_path/libs" "${executable_path}"
RESULT_VARIABLE install_name_tool_result)
if (NOT install_name_tool_result EQUAL "0")
message(FATAL_ERROR "install_name_tool failed: ${install_name_tool_result}")
endif()
elseif (UNIX)
execute_process(COMMAND patchelf --set-rpath '$ORIGIN/../libs' "${executable_path}")
execute_process(COMMAND patchelf --set-rpath '$ORIGIN/../libs' "${executable_path}"
RESULT_VARIABLE patchelf_result)
if (NOT patchelf_result EQUAL "0")
message(FATAL_ERROR "patchelf failed: ${patchelf_result}")
endif()
endif()
endfunction()
@@ -127,7 +209,7 @@ if (BUNDLE_TARGET_EXECUTE)
set(bundle_dir ${BINARY_PATH}/bundle)
# On Linux, always bundle an AppImage.
if (DEFINED LINUXDEPLOY)
if (CMAKE_HOST_SYSTEM_NAME STREQUAL "Linux")
if (IN_PLACE)
message(FATAL_ERROR "Cannot bundle for Linux in-place.")
endif()
@@ -146,14 +228,12 @@ if (BUNDLE_TARGET_EXECUTE)
if (BUNDLE_QT)
bundle_qt("${bundled_executable_path}")
endif()
if (WIN32 OR NOT BUNDLE_QT)
else()
bundle_standalone("${bundled_executable_path}" "${EXECUTABLE_PATH}" "${BUNDLE_LIBRARY_PATHS}")
endif()
endif()
else()
# --- Bundling target creation logic ---
elseif (BUNDLE_TARGET_DOWNLOAD_LINUXDEPLOY)
# --- linuxdeploy download logic ---
# Downloads and extracts a linuxdeploy component.
function(download_linuxdeploy_component base_dir name executable_name)
@@ -161,7 +241,7 @@ else()
if (NOT EXISTS "${executable_file}")
message(STATUS "Downloading ${executable_name}")
file(DOWNLOAD
"https://github.com/linuxdeploy/${name}/releases/download/continuous/${executable_name}"
"https://github.com/${name}/releases/download/continuous/${executable_name}"
"${executable_file}" SHOW_PROGRESS)
file(CHMOD "${executable_file}" PERMISSIONS OWNER_READ OWNER_WRITE OWNER_EXECUTE)
@@ -170,7 +250,11 @@ else()
message(STATUS "Extracting ${executable_name}")
execute_process(
COMMAND "${executable_file}" --appimage-extract
WORKING_DIRECTORY "${base_dir}")
WORKING_DIRECTORY "${base_dir}"
RESULT_VARIABLE extract_result)
if (NOT extract_result EQUAL "0")
message(FATAL_ERROR "AppImage extract failed: ${extract_result}")
endif()
else()
message(STATUS "Copying ${executable_name}")
file(COPY "${executable_file}" DESTINATION "${base_dir}/squashfs-root/usr/bin/")
@@ -178,89 +262,102 @@ else()
endif()
endfunction()
# Download plugins first so they don't overwrite linuxdeploy's AppRun file.
download_linuxdeploy_component("${LINUXDEPLOY_PATH}" "linuxdeploy/linuxdeploy-plugin-qt" "linuxdeploy-plugin-qt-${LINUXDEPLOY_ARCH}.AppImage")
download_linuxdeploy_component("${LINUXDEPLOY_PATH}" "darealshinji/linuxdeploy-plugin-checkrt" "linuxdeploy-plugin-checkrt.sh")
download_linuxdeploy_component("${LINUXDEPLOY_PATH}" "linuxdeploy/linuxdeploy" "linuxdeploy-${LINUXDEPLOY_ARCH}.AppImage")
else()
# --- Bundling target creation logic ---
# Creates the base bundle target with common files and pre-bundle steps.
function(create_base_bundle_target)
message(STATUS "Creating base bundle target")
add_custom_target(bundle)
add_custom_command(
TARGET bundle
COMMAND ${CMAKE_COMMAND} -E make_directory "${CMAKE_BINARY_DIR}/bundle/")
add_custom_command(
TARGET bundle
COMMAND ${CMAKE_COMMAND} -E make_directory "${CMAKE_BINARY_DIR}/bundle/dist/")
add_custom_command(
TARGET bundle
COMMAND ${CMAKE_COMMAND} -E copy "${CMAKE_SOURCE_DIR}/dist/icon.png" "${CMAKE_BINARY_DIR}/bundle/dist/citra.png")
add_custom_command(
TARGET bundle
COMMAND ${CMAKE_COMMAND} -E copy "${CMAKE_SOURCE_DIR}/license.txt" "${CMAKE_BINARY_DIR}/bundle/")
add_custom_command(
TARGET bundle
COMMAND ${CMAKE_COMMAND} -E copy "${CMAKE_SOURCE_DIR}/README.md" "${CMAKE_BINARY_DIR}/bundle/")
add_custom_command(
TARGET bundle
COMMAND ${CMAKE_COMMAND} -E copy_directory "${CMAKE_SOURCE_DIR}/dist/scripting" "${CMAKE_BINARY_DIR}/bundle/scripting")
# On Linux, add a command to prepare linuxdeploy and any required plugins before any bundling occurs.
if (CMAKE_HOST_SYSTEM_NAME STREQUAL "Linux")
add_custom_command(
TARGET bundle
COMMAND ${CMAKE_COMMAND}
"-DBUNDLE_TARGET_DOWNLOAD_LINUXDEPLOY=1"
"-DLINUXDEPLOY_PATH=${CMAKE_BINARY_DIR}/externals/linuxdeploy"
"-DLINUXDEPLOY_ARCH=${CMAKE_HOST_SYSTEM_PROCESSOR}"
-P "${CMAKE_SOURCE_DIR}/CMakeModules/BundleTarget.cmake"
WORKING_DIRECTORY "${CMAKE_BINARY_DIR}")
endif()
endfunction()
# Adds a target to the bundle target, packing in required libraries.
# If in_place is true, the bundling will be done in-place as part of the specified target.
function(bundle_target_internal target_name in_place)
# Create base bundle target if it does not exist.
if (NOT in_place AND NOT TARGET bundle)
message(STATUS "Creating base bundle target")
add_custom_target(bundle)
add_custom_command(
TARGET bundle
COMMAND ${CMAKE_COMMAND} -E make_directory "${CMAKE_BINARY_DIR}/bundle/")
add_custom_command(
TARGET bundle
COMMAND ${CMAKE_COMMAND} -E make_directory "${CMAKE_BINARY_DIR}/bundle/dist/")
add_custom_command(
TARGET bundle
COMMAND ${CMAKE_COMMAND} -E copy "${CMAKE_SOURCE_DIR}/dist/icon.png" "${CMAKE_BINARY_DIR}/bundle/dist/citra.png")
add_custom_command(
TARGET bundle
COMMAND ${CMAKE_COMMAND} -E copy "${CMAKE_SOURCE_DIR}/license.txt" "${CMAKE_BINARY_DIR}/bundle/")
add_custom_command(
TARGET bundle
COMMAND ${CMAKE_COMMAND} -E copy "${CMAKE_SOURCE_DIR}/README.md" "${CMAKE_BINARY_DIR}/bundle/")
add_custom_command(
TARGET bundle
COMMAND ${CMAKE_COMMAND} -E copy_directory "${CMAKE_SOURCE_DIR}/dist/scripting" "${CMAKE_BINARY_DIR}/bundle/scripting")
create_base_bundle_target()
endif()
set(BUNDLE_EXECUTABLE_PATH "$<TARGET_FILE:${target_name}>")
set(bundle_executable_path "$<TARGET_FILE:${target_name}>")
if (target_name MATCHES ".*qt")
set(BUNDLE_QT ON)
set(bundle_qt ON)
if (APPLE)
# For Qt targets on Apple, expect an app bundle.
set(BUNDLE_EXECUTABLE_PATH "$<TARGET_BUNDLE_DIR:${target_name}>")
set(bundle_executable_path "$<TARGET_BUNDLE_DIR:${target_name}>")
endif()
else()
set(BUNDLE_QT OFF)
set(bundle_qt OFF)
endif()
# Build a list of library search paths from prefix paths.
foreach(prefix_path IN LISTS CMAKE_PREFIX_PATH CMAKE_SYSTEM_PREFIX_PATH)
foreach(prefix_path IN LISTS CMAKE_FIND_ROOT_PATH CMAKE_PREFIX_PATH CMAKE_SYSTEM_PREFIX_PATH)
if (WIN32)
list(APPEND BUNDLE_LIBRARY_PATHS "${prefix_path}/bin")
list(APPEND bundle_library_paths "${prefix_path}/bin")
endif()
list(APPEND BUNDLE_LIBRARY_PATHS "${prefix_path}/lib")
list(APPEND bundle_library_paths "${prefix_path}/lib")
endforeach()
foreach(library_path IN LISTS CMAKE_SYSTEM_LIBRARY_PATH)
list(APPEND BUNDLE_LIBRARY_PATHS "${library_path}")
list(APPEND bundle_library_paths "${library_path}")
endforeach()
# On Linux, prepare linuxdeploy and any required plugins.
if (CMAKE_SYSTEM_NAME STREQUAL "Linux")
set(LINUXDEPLOY_BASE "${CMAKE_BINARY_DIR}/externals/linuxdeploy")
# Download plugins first so they don't overwrite linuxdeploy's AppRun file.
download_linuxdeploy_component("${LINUXDEPLOY_BASE}" "linuxdeploy-plugin-qt" "linuxdeploy-plugin-qt-x86_64.AppImage")
download_linuxdeploy_component("${LINUXDEPLOY_BASE}" "linuxdeploy-plugin-checkrt" "linuxdeploy-plugin-checkrt-x86_64.sh")
download_linuxdeploy_component("${LINUXDEPLOY_BASE}" "linuxdeploy" "linuxdeploy-x86_64.AppImage")
set(EXTRA_BUNDLE_ARGS "-DLINUXDEPLOY=${LINUXDEPLOY_BASE}/squashfs-root/AppRun")
endif()
if (in_place)
message(STATUS "Adding in-place bundling to ${target_name}")
set(DEST_TARGET ${target_name})
set(dest_target ${target_name})
else()
message(STATUS "Adding ${target_name} to bundle target")
set(DEST_TARGET bundle)
set(dest_target bundle)
add_dependencies(bundle ${target_name})
endif()
add_custom_command(TARGET ${DEST_TARGET} POST_BUILD
add_custom_command(TARGET ${dest_target} POST_BUILD
COMMAND ${CMAKE_COMMAND}
"-DCMAKE_PREFIX_PATH=\"${CMAKE_PREFIX_PATH}\""
"-DQT_HOST_PATH=\"${QT_HOST_PATH}\""
"-DQT_TARGET_PATH=\"${QT_TARGET_PATH}\""
"-DBUNDLE_TARGET_EXECUTE=1"
"-DTARGET=${target_name}"
"-DSOURCE_PATH=${CMAKE_SOURCE_DIR}"
"-DBINARY_PATH=${CMAKE_BINARY_DIR}"
"-DEXECUTABLE_PATH=${BUNDLE_EXECUTABLE_PATH}"
"-DBUNDLE_LIBRARY_PATHS=\"${BUNDLE_LIBRARY_PATHS}\""
"-DBUNDLE_QT=${BUNDLE_QT}"
"-DEXECUTABLE_PATH=${bundle_executable_path}"
"-DBUNDLE_LIBRARY_PATHS=\"${bundle_library_paths}\""
"-DBUNDLE_QT=${bundle_qt}"
"-DIN_PLACE=${in_place}"
${EXTRA_BUNDLE_ARGS}
"-DLINUXDEPLOY=${CMAKE_BINARY_DIR}/externals/linuxdeploy/squashfs-root/AppRun"
-P "${CMAKE_SOURCE_DIR}/CMakeModules/BundleTarget.cmake"
WORKING_DIRECTORY "${CMAKE_BINARY_DIR}")
endfunction()

View File

@@ -1,21 +1,20 @@
set(CURRENT_MODULE_DIR ${CMAKE_CURRENT_LIST_DIR})
# This function downloads Qt using aqt. The path of the downloaded content will be added to the CMAKE_PREFIX_PATH.
# Params:
# target: Qt dependency to install. Specify a version number to download Qt, or "tools_(name)" for a specific build tool.
function(download_qt target)
# Determines parameters based on the host and target for downloading the right Qt binaries.
function(determine_qt_parameters target host_out type_out arch_out arch_path_out host_type_out host_arch_out host_arch_path_out)
if (target MATCHES "tools_.*")
set(DOWNLOAD_QT_TOOL ON)
set(tool ON)
else()
set(DOWNLOAD_QT_TOOL OFF)
set(tool OFF)
endif()
# Determine installation parameters for OS, architecture, and compiler
if (WIN32)
set(host "windows")
set(type "desktop")
if (NOT DOWNLOAD_QT_TOOL)
if (NOT tool)
if (MINGW)
set(arch "win64_mingw")
set(arch_path "mingw_64")
@@ -28,21 +27,35 @@ function(download_qt target)
message(FATAL_ERROR "Unsupported bundled Qt architecture. Enable USE_SYSTEM_QT and provide your own.")
endif()
set(arch "win64_${arch_path}")
# In case we're cross-compiling, prepare to also fetch the correct host Qt tools.
if (CMAKE_HOST_SYSTEM_PROCESSOR STREQUAL "AMD64")
set(host_arch_path "msvc2019_64")
elseif (CMAKE_HOST_SYSTEM_PROCESSOR STREQUAL "ARM64")
# TODO: msvc2019_arm64 doesn't include some of the required tools for some reason,
# TODO: so until it does, just use msvc2019_64 under x86_64 emulation.
# set(host_arch_path "msvc2019_arm64")
set(host_arch_path "msvc2019_64")
endif()
set(host_arch "win64_${host_arch_path}")
else()
message(FATAL_ERROR "Unsupported bundled Qt toolchain. Enable USE_SYSTEM_QT and provide your own.")
endif()
endif()
elseif (APPLE)
set(host "mac")
if (IOS AND NOT DOWNLOAD_QT_TOOL)
set(type "desktop")
set(arch "clang_64")
set(arch_path "macos")
if (IOS AND NOT tool)
set(host_type "${type}")
set(host_arch "${arch}")
set(host_arch_path "${arch_path}")
set(type "ios")
set(arch "ios")
set(arch_path "ios")
set(host_arch_path "macos")
else()
set(type "desktop")
set(arch "clang_64")
set(arch_path "macos")
endif()
else()
set(host "linux")
@@ -51,38 +64,64 @@ function(download_qt target)
set(arch_path "linux")
endif()
get_external_prefix(qt base_path)
file(MAKE_DIRECTORY "${base_path}")
set(${host_out} "${host}" PARENT_SCOPE)
set(${type_out} "${type}" PARENT_SCOPE)
set(${arch_out} "${arch}" PARENT_SCOPE)
set(${arch_path_out} "${arch_path}" PARENT_SCOPE)
if (DEFINED host_type)
set(${host_type_out} "${host_type}" PARENT_SCOPE)
else()
set(${host_type_out} "${type}" PARENT_SCOPE)
endif()
if (DEFINED host_arch)
set(${host_arch_out} "${host_arch}" PARENT_SCOPE)
else()
set(${host_arch_out} "${arch}" PARENT_SCOPE)
endif()
if (DEFINED host_arch_path)
set(${host_arch_path_out} "${host_arch_path}" PARENT_SCOPE)
else()
set(${host_arch_path_out} "${arch_path}" PARENT_SCOPE)
endif()
endfunction()
# Download Qt binaries for a specifc configuration.
function(download_qt_configuration prefix_out target host type arch arch_path base_path)
if (target MATCHES "tools_.*")
set(tool ON)
else()
set(tool OFF)
endif()
set(install_args -c "${CURRENT_MODULE_DIR}/aqt_config.ini")
if (DOWNLOAD_QT_TOOL)
if (tool)
set(prefix "${base_path}/Tools")
set(install_args ${install_args} install-tool --outputdir ${base_path} ${host} desktop ${target})
else()
set(prefix "${base_path}/${target}/${arch_path}")
if (host_arch_path)
set(host_flag "--autodesktop")
set(host_prefix "${base_path}/${target}/${host_arch_path}")
endif()
set(install_args ${install_args} install-qt --outputdir ${base_path} ${host} ${type} ${target} ${arch} ${host_flag}
-m qtmultimedia --archives qttranslations qttools qtsvg qtbase)
set(install_args ${install_args} install-qt --outputdir ${base_path} ${host} ${type} ${target} ${arch}
-m qtmultimedia --archives qttranslations qttools qtsvg qtbase)
endif()
if (NOT EXISTS "${prefix}")
message(STATUS "Downloading binaries for Qt...")
message(STATUS "Downloading Qt binaries for ${target}:${host}:${type}:${arch}:${arch_path}")
set(AQT_PREBUILD_BASE_URL "https://github.com/miurahr/aqtinstall/releases/download/v3.1.9")
if (WIN32)
set(aqt_path "${base_path}/aqt.exe")
file(DOWNLOAD
${AQT_PREBUILD_BASE_URL}/aqt.exe
${aqt_path} SHOW_PROGRESS)
if (NOT EXISTS "${aqt_path}")
file(DOWNLOAD
${AQT_PREBUILD_BASE_URL}/aqt.exe
${aqt_path} SHOW_PROGRESS)
endif()
execute_process(COMMAND ${aqt_path} ${install_args}
WORKING_DIRECTORY ${base_path})
elseif (APPLE)
set(aqt_path "${base_path}/aqt-macos")
file(DOWNLOAD
${AQT_PREBUILD_BASE_URL}/aqt-macos
${aqt_path} SHOW_PROGRESS)
if (NOT EXISTS "${aqt_path}")
file(DOWNLOAD
${AQT_PREBUILD_BASE_URL}/aqt-macos
${aqt_path} SHOW_PROGRESS)
endif()
execute_process(COMMAND chmod +x ${aqt_path})
execute_process(COMMAND ${aqt_path} ${install_args}
WORKING_DIRECTORY ${base_path})
@@ -96,18 +135,38 @@ function(download_qt target)
execute_process(COMMAND ${CMAKE_COMMAND} -E env PYTHONPATH=${aqt_install_path} python3 -m aqt ${install_args}
WORKING_DIRECTORY ${base_path})
endif()
message(STATUS "Downloaded Qt binaries for ${target}:${host}:${type}:${arch}:${arch_path} to ${prefix}")
endif()
message(STATUS "Using downloaded Qt binaries at ${prefix}")
set(${prefix_out} "${prefix}" PARENT_SCOPE)
endfunction()
# Add the Qt prefix path so CMake can locate it.
# This function downloads Qt using aqt.
# The path of the downloaded content will be added to the CMAKE_PREFIX_PATH.
# QT_TARGET_PATH is set to the Qt for the compile target platform.
# QT_HOST_PATH is set to a host-compatible Qt, for running tools.
# Params:
# target: Qt dependency to install. Specify a version number to download Qt, or "tools_(name)" for a specific build tool.
function(download_qt target)
determine_qt_parameters("${target}" host type arch arch_path host_type host_arch host_arch_path)
get_external_prefix(qt base_path)
file(MAKE_DIRECTORY "${base_path}")
download_qt_configuration(prefix "${target}" "${host}" "${type}" "${arch}" "${arch_path}" "${base_path}")
if (DEFINED host_arch_path AND NOT "${host_arch_path}" STREQUAL "${arch_path}")
download_qt_configuration(host_prefix "${target}" "${host}" "${host_type}" "${host_arch}" "${host_arch_path}" "${base_path}")
else()
set(host_prefix "${prefix}")
endif()
set(QT_TARGET_PATH "${prefix}" CACHE STRING "")
set(QT_HOST_PATH "${host_prefix}" CACHE STRING "")
# Add the target Qt prefix path so CMake can locate it.
list(APPEND CMAKE_PREFIX_PATH "${prefix}")
set(CMAKE_PREFIX_PATH ${CMAKE_PREFIX_PATH} PARENT_SCOPE)
if (DEFINED host_prefix)
message(STATUS "Using downloaded host Qt binaries at ${host_prefix}")
set(QT_HOST_PATH "${host_prefix}" CACHE STRING "")
endif()
endfunction()
function(download_moltenvk)

View File

@@ -287,5 +287,13 @@ dumptxt -p $[OUT] "nfcSecret1Seed=$[NFC_SEED_1]"
dumptxt -p $[OUT] "nfcSecret1HmacKey=$[NFC_HMAC_KEY_1]"
dumptxt -p $[OUT] "nfcIv=$[NFC_IV]"
# Dump seeddb.bin as well
set SEEDDB_IN "0:/gm9/out/seeddb.bin"
set SEEDDB_OUT "0:/gm9/seeddb.bin"
sdump -w seeddb.bin
cp -w $[SEEDDB_IN] $[SEEDDB_OUT]
@Exit

View File

@@ -6,5 +6,5 @@ Usage:
1. Copy "DumpKeys.gm9" into the "gm9/scripts/" directory on your SD card.
2. Launch GodMode9, press the HOME button, select Scripts, and select "DumpKeys" from the list of scripts that appears.
3. Wait for the script to complete and return you to the GodMode9 main menu.
4. Power off your system and copy the "gm9/aes_keys.txt" file off of your SD card into "(Citra directory)/sysdata/".
4. Power off your system and copy the "gm9/aes_keys.txt" and "gm9/seeddb.bin" files off of your SD card into "(Citra directory)/sysdata/".

View File

@@ -11,3 +11,4 @@ type = QT
file_filter = ../../src/android/app/src/main/res/values-<lang>/strings.xml
source_file = ../../src/android/app/src/main/res/values/strings.xml
type = ANDROID
lang_map = es_ES:es, hu_HU:hu, ru_RU:ru, pt_BR:pt, zh_CN:zh

View File

@@ -57,6 +57,12 @@ if(USE_SYSTEM_CRYPTOPP)
add_library(cryptopp INTERFACE)
target_link_libraries(cryptopp INTERFACE cryptopp::cryptopp)
else()
if (WIN32 AND NOT MSVC AND "arm64" IN_LIST ARCHITECTURE)
# TODO: CryptoPP ARM64 ASM does not seem to support Windows unless compiled with MSVC.
# TODO: See https://github.com/weidai11/cryptopp/issues/1260
set(CRYPTOPP_DISABLE_ASM ON CACHE BOOL "")
endif()
set(CRYPTOPP_BUILD_DOCUMENTATION OFF CACHE BOOL "")
set(CRYPTOPP_BUILD_TESTING OFF CACHE BOOL "")
set(CRYPTOPP_INSTALL OFF CACHE BOOL "")
@@ -235,6 +241,18 @@ endif()
# DiscordRPC
if (USE_DISCORD_PRESENCE)
# rapidjson used by discord-rpc is old and doesn't correctly detect endianness for some platforms.
include(TestBigEndian)
test_big_endian(RAPIDJSON_BIG_ENDIAN)
if(RAPIDJSON_BIG_ENDIAN)
add_compile_definitions(RAPIDJSON_ENDIAN=1)
else()
add_compile_definitions(RAPIDJSON_ENDIAN=0)
endif()
# Apply a dummy CLANG_FORMAT_SUFFIX to disable discord-rpc's unnecessary automatic clang-format.
set(CLANG_FORMAT_SUFFIX "dummy")
add_subdirectory(discord-rpc EXCLUDE_FROM_ALL)
target_include_directories(discord-rpc INTERFACE ./discord-rpc/include)
endif()
@@ -276,11 +294,20 @@ endif()
add_library(httplib INTERFACE)
if(USE_SYSTEM_CPP_HTTPLIB)
find_package(CppHttp 0.14.1)
if(CppHttp_FOUND)
target_link_libraries(httplib INTERFACE httplib::httplib)
else()
message(STATUS "Cpp-httplib not found or not suitable version! Falling back to bundled...")
# Detect if system cpphttplib is a shared library
# this breaks building as Citra relies on functions that are moved
# into the shared object.
get_target_property(HTTP_LIBS httplib::httplib INTERFACE_LINK_LIBRARIES)
if(HTTP_LIBS)
message(WARNING "Shared cpp-http (${HTTP_LIBS}) not supported. Falling back to bundled...")
target_include_directories(httplib SYSTEM INTERFACE ./httplib)
else()
if(CppHttp_FOUND)
target_link_libraries(httplib INTERFACE httplib::httplib)
else()
message(STATUS "Cpp-httplib not found or not suitable version! Falling back to bundled...")
target_include_directories(httplib SYSTEM INTERFACE ./httplib)
endif()
endif()
else()
target_include_directories(httplib SYSTEM INTERFACE ./httplib)

View File

@@ -10,7 +10,7 @@ plugins {
id("org.jetbrains.kotlin.android")
id("de.undercouch.download") version "5.5.0"
id("kotlin-parcelize")
kotlin("plugin.serialization") version "1.8.21"
kotlin("plugin.serialization") version "1.9.22"
id("androidx.navigation.safeargs.kotlin")
}
@@ -173,23 +173,23 @@ android {
dependencies {
implementation("androidx.recyclerview:recyclerview:1.3.2")
implementation("androidx.activity:activity-ktx:1.8.0")
implementation("androidx.activity:activity-ktx:1.8.2")
implementation("androidx.fragment:fragment-ktx:1.6.2")
implementation("androidx.appcompat:appcompat:1.6.1")
implementation("androidx.documentfile:documentfile:1.0.1")
implementation("androidx.lifecycle:lifecycle-viewmodel-ktx:2.6.1")
implementation("androidx.lifecycle:lifecycle-viewmodel-ktx:2.7.0")
implementation("androidx.slidingpanelayout:slidingpanelayout:1.2.0")
implementation("com.google.android.material:material:1.9.0")
implementation("androidx.core:core-splashscreen:1.0.1")
implementation("androidx.work:work-runtime:2.8.1")
implementation("androidx.work:work-runtime:2.9.0")
implementation("org.ini4j:ini4j:0.5.4")
implementation("androidx.swiperefreshlayout:swiperefreshlayout:1.1.0")
implementation("androidx.navigation:navigation-fragment-ktx:2.7.5")
implementation("androidx.navigation:navigation-ui-ktx:2.7.5")
implementation("androidx.navigation:navigation-fragment-ktx:2.7.6")
implementation("androidx.navigation:navigation-ui-ktx:2.7.6")
implementation("info.debatty:java-string-similarity:2.0.0")
implementation("org.jetbrains.kotlinx:kotlinx-serialization-json:1.5.0")
implementation("org.jetbrains.kotlinx:kotlinx-serialization-json:1.6.2")
implementation("androidx.preference:preference-ktx:1.2.1")
implementation("io.coil-kt:coil:2.2.2")
implementation("io.coil-kt:coil:2.5.0")
}
// Download Vulkan Validation Layers from the KhronosGroup GitHub.

View File

@@ -42,6 +42,9 @@
android:banner="@mipmap/ic_launcher"
android:requestLegacyExternalStorage="true">
<meta-data android:name="android.game_mode_config"
android:resource="@xml/game_mode_config" />
<activity
android:name="org.citra.citra_emu.ui.main.MainActivity"
android:theme="@style/Theme.Citra.Splash.Main"

View File

@@ -9,10 +9,13 @@ import android.app.Application
import android.app.NotificationChannel
import android.app.NotificationManager
import android.content.Context
import android.os.Build
import org.citra.citra_emu.utils.DirectoryInitialization
import org.citra.citra_emu.utils.DocumentsTree
import org.citra.citra_emu.utils.GpuDriverHelper
import org.citra.citra_emu.utils.PermissionsHandler
import org.citra.citra_emu.utils.Log
import org.citra.citra_emu.utils.MemoryUtil
class CitraApplication : Application() {
private fun createNotificationChannel() {
@@ -53,9 +56,20 @@ class CitraApplication : Application() {
}
NativeLibrary.logDeviceInfo()
logDeviceInfo()
createNotificationChannel()
}
fun logDeviceInfo() {
Log.info("Device Manufacturer - ${Build.MANUFACTURER}")
Log.info("Device Model - ${Build.MODEL}")
if (Build.VERSION.SDK_INT > Build.VERSION_CODES.R) {
Log.info("SoC Manufacturer - ${Build.SOC_MANUFACTURER}")
Log.info("SoC Model - ${Build.SOC_MODEL}")
}
Log.info("Total System Memory - ${MemoryUtil.getDeviceRAM()}")
}
companion object {
private var application: CitraApplication? = null

View File

@@ -413,12 +413,12 @@ object NativeLibrary {
}
fun setEmulationActivity(emulationActivity: EmulationActivity?) {
Log.verbose("[NativeLibrary] Registering EmulationActivity.")
Log.debug("[NativeLibrary] Registering EmulationActivity.")
sEmulationActivity = WeakReference(emulationActivity)
}
fun clearEmulationActivity() {
Log.verbose("[NativeLibrary] Unregistering EmulationActivity.")
Log.debug("[NativeLibrary] Unregistering EmulationActivity.")
sEmulationActivity.clear()
}

View File

@@ -94,14 +94,14 @@ object DirectoryInitialization {
val dataPath = PermissionsHandler.citraDirectory
if (dataPath.toString().isNotEmpty()) {
userPath = dataPath.toString()
Log.debug("[DirectoryInitialization] User Dir: $userPath")
android.util.Log.d("[Citra Frontend]", "[DirectoryInitialization] User Dir: $userPath")
return true
}
return false
}
private fun copyAsset(asset: String, output: File, overwrite: Boolean, context: Context) {
Log.verbose("[DirectoryInitialization] Copying File $asset to $output")
Log.debug("[DirectoryInitialization] Copying File $asset to $output")
try {
if (!output.exists() || overwrite) {
val inputStream = context.assets.open(asset)
@@ -121,7 +121,7 @@ object DirectoryInitialization {
overwrite: Boolean,
context: Context
) {
Log.verbose("[DirectoryInitialization] Copying Folder $assetFolder to $outputFolder")
Log.debug("[DirectoryInitialization] Copying Folder $assetFolder to $outputFolder")
try {
var createdFolder = false
for (file in context.assets.list(assetFolder)!!) {

View File

@@ -4,34 +4,17 @@
package org.citra.citra_emu.utils
import android.util.Log
import org.citra.citra_emu.BuildConfig
/**
* Contains methods that call through to [android.util.Log], but
* with the same TAG automatically provided. Also no-ops VERBOSE and DEBUG log
* levels in release builds.
*/
object Log {
// Tracks whether we should share the old log or the current log
var gameLaunched = false
private const val TAG = "Citra Frontend"
fun verbose(message: String?) {
if (BuildConfig.DEBUG) {
Log.v(TAG, message!!)
}
}
external fun debug(message: String)
fun debug(message: String?) {
if (BuildConfig.DEBUG) {
Log.d(TAG, message!!)
}
}
external fun warning(message: String)
fun info(message: String?) = Log.i(TAG, message!!)
external fun info(message: String)
fun warning(message: String?) = Log.w(TAG, message!!)
external fun error(message: String)
fun error(message: String?) = Log.e(TAG, message!!)
external fun critical(message: String)
}

View File

@@ -0,0 +1,108 @@
// SPDX-FileCopyrightText: 2023 yuzu Emulator Project
// SPDX-License-Identifier: GPL-2.0-or-later
package org.citra.citra_emu.utils
import android.app.ActivityManager
import android.content.Context
import android.os.Build
import org.citra.citra_emu.CitraApplication
import org.citra.citra_emu.R
import java.util.Locale
import kotlin.math.ceil
object MemoryUtil {
private val context get() = CitraApplication.appContext
private val Float.hundredths: String
get() = String.format(Locale.ROOT, "%.2f", this)
const val Kb: Float = 1024F
const val Mb = Kb * 1024
const val Gb = Mb * 1024
const val Tb = Gb * 1024
const val Pb = Tb * 1024
const val Eb = Pb * 1024
fun bytesToSizeUnit(size: Float, roundUp: Boolean = false): String =
when {
size < Kb -> {
context.getString(
R.string.memory_formatted,
size.hundredths,
context.getString(R.string.memory_byte_shorthand)
)
}
size < Mb -> {
context.getString(
R.string.memory_formatted,
if (roundUp) ceil(size / Kb) else (size / Kb).hundredths,
context.getString(R.string.memory_kilobyte)
)
}
size < Gb -> {
context.getString(
R.string.memory_formatted,
if (roundUp) ceil(size / Mb) else (size / Mb).hundredths,
context.getString(R.string.memory_megabyte)
)
}
size < Tb -> {
context.getString(
R.string.memory_formatted,
if (roundUp) ceil(size / Gb) else (size / Gb).hundredths,
context.getString(R.string.memory_gigabyte)
)
}
size < Pb -> {
context.getString(
R.string.memory_formatted,
if (roundUp) ceil(size / Tb) else (size / Tb).hundredths,
context.getString(R.string.memory_terabyte)
)
}
size < Eb -> {
context.getString(
R.string.memory_formatted,
if (roundUp) ceil(size / Pb) else (size / Pb).hundredths,
context.getString(R.string.memory_petabyte)
)
}
else -> {
context.getString(
R.string.memory_formatted,
if (roundUp) ceil(size / Eb) else (size / Eb).hundredths,
context.getString(R.string.memory_exabyte)
)
}
}
val totalMemory: Float
get() {
val memInfo = ActivityManager.MemoryInfo()
with(context.getSystemService(Context.ACTIVITY_SERVICE) as ActivityManager) {
getMemoryInfo(memInfo)
}
return if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.UPSIDE_DOWN_CAKE) {
memInfo.advertisedMem.toFloat()
} else {
memInfo.totalMem.toFloat()
}
}
fun isLessThan(minimum: Int, size: Float): Boolean =
when (size) {
Kb -> totalMemory < Mb && totalMemory < minimum
Mb -> totalMemory < Gb && (totalMemory / Mb) < minimum
Gb -> totalMemory < Tb && (totalMemory / Gb) < minimum
Tb -> totalMemory < Pb && (totalMemory / Tb) < minimum
Pb -> totalMemory < Eb && (totalMemory / Pb) < minimum
Eb -> totalMemory / Eb < minimum
else -> totalMemory < Kb && totalMemory < minimum
}
// Devices are unlikely to have 0.5GB increments of memory so we'll just round up to account for
// the potential error created by memInfo.totalMem
fun getDeviceRAM(): String = bytesToSizeUnit(totalMemory, true)
}

View File

@@ -28,6 +28,7 @@ add_library(citra-android SHARED
ndk_motion.cpp
ndk_motion.h
system_save_game.cpp
native_log.cpp
)
target_link_libraries(citra-android PRIVATE audio_core citra_common citra_core input_common network)

View File

@@ -0,0 +1,30 @@
// SPDX-FileCopyrightText: 2023 yuzu Emulator Project
// SPDX-License-Identifier: GPL-2.0-or-later
#include <common/logging/log.h>
#include <jni.h>
#include "android_common/android_common.h"
extern "C" {
void Java_org_citra_citra_1emu_utils_Log_debug(JNIEnv* env, jobject obj, jstring jmessage) {
LOG_DEBUG(Frontend, "{}", GetJString(env, jmessage));
}
void Java_org_citra_citra_1emu_utils_Log_warning(JNIEnv* env, jobject obj, jstring jmessage) {
LOG_WARNING(Frontend, "{}", GetJString(env, jmessage));
}
void Java_org_citra_citra_1emu_utils_Log_info(JNIEnv* env, jobject obj, jstring jmessage) {
LOG_INFO(Frontend, "{}", GetJString(env, jmessage));
}
void Java_org_citra_citra_1emu_utils_Log_error(JNIEnv* env, jobject obj, jstring jmessage) {
LOG_ERROR(Frontend, "{}", GetJString(env, jmessage));
}
void Java_org_citra_citra_1emu_utils_Log_critical(JNIEnv* env, jobject obj, jstring jmessage) {
LOG_CRITICAL(Frontend, "{}", GetJString(env, jmessage));
}
} // extern "C"

View File

@@ -442,6 +442,17 @@
<string name="cia_install_error_encrypted">\"%s\" must be decrypted before being used with Citra.\n A real 3DS is required</string>
<string name="cia_install_error_unknown">An unknown error occurred while installing \"%s\".\n Please see the log for more details</string>
<!-- Memory Sizes -->
<string name="memory_formatted">%1$s %2$s</string>
<string name="memory_byte">Byte</string>
<string name="memory_byte_shorthand">B</string>
<string name="memory_kilobyte">KB</string>
<string name="memory_megabyte">MB</string>
<string name="memory_gigabyte">GB</string>
<string name="memory_terabyte">TB</string>
<string name="memory_petabyte">PB</string>
<string name="memory_exabyte">EB</string>
<!-- Theme Modes -->
<string name="change_theme_mode">Change Theme Mode</string>
<string name="theme_mode_follow_system">Follow System</string>

View File

@@ -0,0 +1,7 @@
<?xml version="1.0" encoding="UTF-8"?>
<game-mode-config
xmlns:android="http://schemas.android.com/apk/res/android"
android:supportsBatteryGameMode="true"
android:supportsPerformanceGameMode="true"
android:allowGameDownscaling="false"
android:allowGameFpsOverride="false"/>

View File

@@ -4,10 +4,10 @@
// Top-level build file where you can add configuration options common to all sub-projects/modules.
plugins {
id("com.android.application") version "8.1.2" apply false
id("com.android.library") version "8.1.2" apply false
id("org.jetbrains.kotlin.android") version "1.8.21" apply false
id("org.jetbrains.kotlin.plugin.serialization") version "1.8.21"
id("com.android.application") version "8.2.1" apply false
id("com.android.library") version "8.2.1" apply false
id("org.jetbrains.kotlin.android") version "1.9.22" apply false
id("org.jetbrains.kotlin.plugin.serialization") version "1.9.22"
}
tasks.register("clean").configure {
@@ -19,6 +19,6 @@ buildscript {
google()
}
dependencies {
classpath("androidx.navigation:navigation-safe-args-gradle-plugin:2.7.5")
classpath("androidx.navigation:navigation-safe-args-gradle-plugin:2.7.6")
}
}

View File

@@ -3,4 +3,4 @@ distributionBase=GRADLE_USER_HOME
distributionPath=wrapper/dists
zipStoreBase=GRADLE_USER_HOME
zipStorePath=wrapper/dists
distributionUrl=https\://services.gradle.org/distributions/gradle-8.0-bin.zip
distributionUrl=https\://services.gradle.org/distributions/gradle-8.2-bin.zip

View File

@@ -316,7 +316,7 @@ struct SourceStatus {
u16_le sync_count; ///< Is set by the DSP to the value of SourceConfiguration::sync_count
u32_dsp buffer_position; ///< Number of samples into the current buffer
u16_le current_buffer_id; ///< Updated when a buffer finishes playing
INSERT_PADDING_DSPWORDS(1);
u16_le last_buffer_id; ///< Updated when all buffers in the queue finish playing
};
Status status[num_sources];

View File

@@ -298,9 +298,9 @@ void Source::ParseConfig(SourceConfiguration::Configuration& config,
b.buffer_id,
state.mono_or_stereo,
state.format,
true,
{}, // 0 in u32_dsp
false,
true, // from_queue
0, // play_position
false, // has_played
});
}
LOG_TRACE(Audio_DSP, "enqueuing queued {} addr={:#010x} len={} id={}", i,
@@ -321,16 +321,19 @@ void Source::ParseConfig(SourceConfiguration::Configuration& config,
void Source::GenerateFrame() {
current_frame.fill({});
if (state.current_buffer.empty() && !DequeueBuffer()) {
if (state.current_buffer.empty()) {
// TODO(SachinV): Should dequeue happen at the end of the frame generation?
if (DequeueBuffer()) {
return;
}
state.enabled = false;
state.buffer_update = true;
state.last_buffer_id = state.current_buffer_id;
state.current_buffer_id = 0;
return;
}
std::size_t frame_position = 0;
state.current_sample_number = state.next_sample_number;
while (frame_position < current_frame.size()) {
if (state.current_buffer.empty() && !DequeueBuffer()) {
break;
@@ -357,7 +360,7 @@ void Source::GenerateFrame() {
}
// TODO(jroweboy): Keep track of frame_position independently so that it doesn't lose precision
// over time
state.next_sample_number += static_cast<u32>(frame_position * state.rate_multiplier);
state.current_sample_number += static_cast<u32>(frame_position * state.rate_multiplier);
state.filters.ProcessFrame(current_frame);
}
@@ -408,9 +411,9 @@ bool Source::DequeueBuffer() {
// the first playthrough starts at play_position, loops start at the beginning of the buffer
state.current_sample_number = (!buf.has_played) ? buf.play_position : 0;
state.next_sample_number = state.current_sample_number;
state.current_buffer_physical_address = buf.physical_address;
state.current_buffer_id = buf.buffer_id;
state.last_buffer_id = 0;
state.buffer_update = buf.from_queue && !buf.has_played;
if (buf.is_looping) {
@@ -418,8 +421,17 @@ bool Source::DequeueBuffer() {
state.input_queue.push(buf);
}
LOG_TRACE(Audio_DSP, "source_id={} buffer_id={} from_queue={} current_buffer.size()={}",
source_id, buf.buffer_id, buf.from_queue, state.current_buffer.size());
// Because our interpolation consumes samples instead of using an index,
// let's just consume the samples up to the current sample number.
state.current_buffer.erase(
state.current_buffer.begin(),
std::next(state.current_buffer.begin(), state.current_sample_number));
LOG_TRACE(Audio_DSP,
"source_id={} buffer_id={} from_queue={} current_buffer.size()={}, "
"buf.has_played={}, buf.play_position={}",
source_id, buf.buffer_id, buf.from_queue, state.current_buffer.size(), buf.has_played,
buf.play_position);
return true;
}
@@ -432,9 +444,10 @@ SourceStatus::Status Source::GetCurrentStatus() {
ret.is_enabled = state.enabled;
ret.current_buffer_id_dirty = state.buffer_update ? 1 : 0;
state.buffer_update = false;
ret.current_buffer_id = state.current_buffer_id;
ret.buffer_position = state.current_sample_number;
ret.sync_count = state.sync_count;
ret.buffer_position = state.current_sample_number;
ret.current_buffer_id = state.current_buffer_id;
ret.last_buffer_id = state.last_buffer_id;
return ret;
}

View File

@@ -87,8 +87,8 @@ private:
Format format;
bool from_queue;
u32_dsp play_position; // = 0;
bool has_played; // = false;
u32 play_position; // = 0;
bool has_played; // = false;
private:
template <class Archive>
@@ -136,14 +136,14 @@ private:
// Current buffer
u32 current_sample_number = 0;
u32 next_sample_number = 0;
PAddr current_buffer_physical_address = 0;
AudioInterp::StereoBuffer16 current_buffer = {};
// buffer_id state
bool buffer_update = false;
u32 current_buffer_id = 0;
u16 last_buffer_id = 0;
u16 current_buffer_id = 0;
// Decoding state
@@ -170,7 +170,6 @@ private:
ar& mono_or_stereo;
ar& format;
ar& current_sample_number;
ar& next_sample_number;
ar& current_buffer_physical_address;
ar& current_buffer;
ar& buffer_update;

View File

@@ -54,7 +54,7 @@ const std::array<std::array<int, 5>, Settings::NativeAnalog::NumAnalogs> Config:
// This must be in alphabetical order according to action name as it must have the same order as
// UISetting::values.shortcuts, which is alphabetically ordered.
// clang-format off
const std::array<UISettings::Shortcut, 30> Config::default_hotkeys {{
const std::array<UISettings::Shortcut, 35> Config::default_hotkeys {{
{QStringLiteral("Advance Frame"), QStringLiteral("Main Window"), {QStringLiteral(""), Qt::ApplicationShortcut}},
{QStringLiteral("Audio Mute/Unmute"), QStringLiteral("Main Window"), {QStringLiteral("Ctrl+M"), Qt::WindowShortcut}},
{QStringLiteral("Audio Volume Down"), QStringLiteral("Main Window"), {QStringLiteral(""), Qt::WindowShortcut}},
@@ -71,6 +71,11 @@ const std::array<UISettings::Shortcut, 30> Config::default_hotkeys {{
{QStringLiteral("Load Amiibo"), QStringLiteral("Main Window"), {QStringLiteral("F2"), Qt::WidgetWithChildrenShortcut}},
{QStringLiteral("Load File"), QStringLiteral("Main Window"), {QStringLiteral("Ctrl+O"), Qt::WidgetWithChildrenShortcut}},
{QStringLiteral("Load from Newest Slot"), QStringLiteral("Main Window"), {QStringLiteral("Ctrl+V"), Qt::WindowShortcut}},
{QStringLiteral("Multiplayer Browse Public Game Lobby"), QStringLiteral("Main Window"), {QStringLiteral("Ctrl+B"), Qt::ApplicationShortcut}},
{QStringLiteral("Multiplayer Create Room"), QStringLiteral("Main Window"), {QStringLiteral("Ctrl+N"), Qt::ApplicationShortcut}},
{QStringLiteral("Multiplayer Direct Connect to Room"), QStringLiteral("Main Window"), {QStringLiteral("Ctrl+Shift"), Qt::ApplicationShortcut}},
{QStringLiteral("Multiplayer Leave Room"), QStringLiteral("Main Window"), {QStringLiteral("Ctrl+L"), Qt::ApplicationShortcut}},
{QStringLiteral("Multiplayer Show Current Room"), QStringLiteral("Main Window"), {QStringLiteral("Ctrl+R"), Qt::ApplicationShortcut}},
{QStringLiteral("Remove Amiibo"), QStringLiteral("Main Window"), {QStringLiteral("F3"), Qt::ApplicationShortcut}},
{QStringLiteral("Restart Emulation"), QStringLiteral("Main Window"), {QStringLiteral("F6"), Qt::WindowShortcut}},
{QStringLiteral("Rotate Screens Upright"), QStringLiteral("Main Window"), {QStringLiteral("F8"), Qt::WindowShortcut}},
@@ -557,6 +562,15 @@ void Config::ReadMultiplayerValues() {
UISettings::values.game_id = ReadSetting(QStringLiteral("game_id"), 0).toULongLong();
UISettings::values.room_description =
ReadSetting(QStringLiteral("room_description"), QString{}).toString();
UISettings::values.multiplayer_filter_text =
ReadSetting(QStringLiteral("multiplayer_filter_text"), QString{}).toString();
UISettings::values.multiplayer_filter_games_owned =
ReadSetting(QStringLiteral("multiplayer_filter_games_owned"), false).toBool();
UISettings::values.multiplayer_filter_hide_empty =
ReadSetting(QStringLiteral("multiplayer_filter_hide_empty"), false).toBool();
UISettings::values.multiplayer_filter_hide_full =
ReadSetting(QStringLiteral("multiplayer_filter_hide_full"), false).toBool();
// Read ban list back
int size = qt_config->beginReadArray(QStringLiteral("username_ban_list"));
UISettings::values.ban_list.first.resize(size);
@@ -1074,6 +1088,15 @@ void Config::SaveMultiplayerValues() {
WriteSetting(QStringLiteral("game_id"), UISettings::values.game_id, 0);
WriteSetting(QStringLiteral("room_description"), UISettings::values.room_description,
QString{});
WriteSetting(QStringLiteral("multiplayer_filter_text"),
UISettings::values.multiplayer_filter_text, QString{});
WriteSetting(QStringLiteral("multiplayer_filter_games_owned"),
UISettings::values.multiplayer_filter_games_owned, false);
WriteSetting(QStringLiteral("multiplayer_filter_hide_empty"),
UISettings::values.multiplayer_filter_hide_empty, false);
WriteSetting(QStringLiteral("multiplayer_filter_hide_full"),
UISettings::values.multiplayer_filter_hide_full, false);
// Write ban list
qt_config->beginWriteArray(QStringLiteral("username_ban_list"));
for (std::size_t i = 0; i < UISettings::values.ban_list.first.size(); ++i) {

View File

@@ -26,7 +26,7 @@ public:
static const std::array<int, Settings::NativeButton::NumButtons> default_buttons;
static const std::array<std::array<int, 5>, Settings::NativeAnalog::NumAnalogs> default_analogs;
static const std::array<UISettings::Shortcut, 30> default_hotkeys;
static const std::array<UISettings::Shortcut, 35> default_hotkeys;
private:
void Initialize(const std::string& config_name);

View File

@@ -647,6 +647,13 @@ void GMainWindow::InitializeHotkeys() {
link_action_shortcut(ui->action_Advance_Frame, QStringLiteral("Advance Frame"));
link_action_shortcut(ui->action_Load_from_Newest_Slot, QStringLiteral("Load from Newest Slot"));
link_action_shortcut(ui->action_Save_to_Oldest_Slot, QStringLiteral("Save to Oldest Slot"));
link_action_shortcut(ui->action_View_Lobby,
QStringLiteral("Multiplayer Browse Public Game Lobby"));
link_action_shortcut(ui->action_Start_Room, QStringLiteral("Multiplayer Create Room"));
link_action_shortcut(ui->action_Connect_To_Room,
QStringLiteral("Multiplayer Direct Connect to Room"));
link_action_shortcut(ui->action_Show_Room, QStringLiteral("Multiplayer Show Current Room"));
link_action_shortcut(ui->action_Leave_Room, QStringLiteral("Multiplayer Leave Room"));
const auto add_secondary_window_hotkey = [this](QKeySequence hotkey, const char* slot) {
// This action will fire specifically when secondary_window is in focus
@@ -3190,8 +3197,10 @@ int main(int argc, char* argv[]) {
QApplication::setHighDpiScaleFactorRoundingPolicy(rounding_policy);
#ifdef __APPLE__
std::string bin_path = FileUtil::GetBundleDirectory() + DIR_SEP + "..";
chdir(bin_path.c_str());
auto bundle_dir = FileUtil::GetBundleDirectory();
if (bundle_dir) {
FileUtil::SetCurrentDir(bundle_dir.value() + "..");
}
#endif
#ifdef ENABLE_OPENGL

View File

@@ -80,9 +80,8 @@ void DirectConnectWindow::Connect() {
// Store settings
UISettings::values.nickname = ui->nickname->text();
UISettings::values.ip = ui->ip->text();
UISettings::values.port = (ui->port->isModified() && !ui->port->text().isEmpty())
? ui->port->text()
: UISettings::values.port;
UISettings::values.port =
!ui->port->text().isEmpty() ? ui->port->text() : UISettings::values.port;
// attempt to connect in a different thread
QFuture<void> f = QtConcurrent::run([&] {

View File

@@ -63,10 +63,10 @@ Lobby::Lobby(Core::System& system_, QWidget* parent, QStandardItemModel* list,
// UI Buttons
connect(ui->refresh_list, &QPushButton::clicked, this, &Lobby::RefreshLobby);
connect(ui->search, &QLineEdit::textChanged, proxy, &LobbyFilterProxyModel::SetFilterSearch);
connect(ui->games_owned, &QCheckBox::toggled, proxy, &LobbyFilterProxyModel::SetFilterOwned);
connect(ui->hide_empty, &QCheckBox::toggled, proxy, &LobbyFilterProxyModel::SetFilterEmpty);
connect(ui->hide_full, &QCheckBox::toggled, proxy, &LobbyFilterProxyModel::SetFilterFull);
connect(ui->search, &QLineEdit::textChanged, proxy, &LobbyFilterProxyModel::SetFilterSearch);
connect(ui->room_list, &QTreeView::doubleClicked, this, &Lobby::OnJoinRoom);
connect(ui->room_list, &QTreeView::clicked, this, &Lobby::OnExpandRoom);
@@ -74,6 +74,12 @@ Lobby::Lobby(Core::System& system_, QWidget* parent, QStandardItemModel* list,
connect(&room_list_watcher, &QFutureWatcher<AnnounceMultiplayerRoom::RoomList>::finished, this,
&Lobby::OnRefreshLobby);
// Load persistent filters after events are connected to make sure they apply
ui->search->setText(UISettings::values.multiplayer_filter_text);
ui->games_owned->setChecked(UISettings::values.multiplayer_filter_games_owned);
ui->hide_empty->setChecked(UISettings::values.multiplayer_filter_hide_empty);
ui->hide_full->setChecked(UISettings::values.multiplayer_filter_hide_full);
// manually start a refresh when the window is opening
// TODO(jroweboy): if this refresh is slow for people with bad internet, then don't do it as
// part of the constructor, but offload the refresh until after the window shown. perhaps emit a
@@ -180,6 +186,10 @@ void Lobby::OnJoinRoom(const QModelIndex& source) {
UISettings::values.nickname = ui->nickname->text();
UISettings::values.ip = proxy->data(connection_index, LobbyItemHost::HostIPRole).toString();
UISettings::values.port = proxy->data(connection_index, LobbyItemHost::HostPortRole).toString();
UISettings::values.multiplayer_filter_text = ui->search->text();
UISettings::values.multiplayer_filter_games_owned = ui->games_owned->isChecked();
UISettings::values.multiplayer_filter_hide_empty = ui->hide_empty->isChecked();
UISettings::values.multiplayer_filter_hide_full = ui->hide_full->isChecked();
}
void Lobby::ResetModel() {

View File

@@ -188,12 +188,37 @@ public:
}
QVariant data(int role) const override {
if (role != Qt::DisplayRole) {
switch (role) {
case Qt::DisplayRole: {
auto members = data(MemberListRole).toList();
return QStringLiteral("%1 / %2").arg(QString::number(members.size()),
data(MaxPlayerRole).toString());
}
case Qt::ForegroundRole: {
auto members = data(MemberListRole).toList();
auto max_players = data(MaxPlayerRole).toInt();
const QColor room_full_color(255, 48, 32);
const QColor room_almost_full_color(255, 140, 32);
const QColor room_has_players_color(32, 160, 32);
const QColor room_empty_color(128, 128, 128);
if (members.size() >= max_players) {
return QBrush(room_full_color);
} else if (members.size() == (max_players - 1)) {
return QBrush(room_almost_full_color);
} else if (members.size() == 0) {
return QBrush(room_empty_color);
} else if (members.size() > 0 && members.size() < (max_players - 1)) {
return QBrush(room_has_players_color);
}
// FIXME: How to return a value that tells Qt not to modify the
// text color from the default (as if Qt::ForegroundRole wasn't overridden)?
return QBrush(nullptr);
}
default:
return LobbyItem::data(role);
}
auto members = data(MemberListRole).toList();
return QStringLiteral("%1 / %2").arg(QString::number(members.size()),
data(MaxPlayerRole).toString());
}
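A concrete reading of the thresholds above, assuming a room with max_players of 4: four members draw red, three orange, one or two green, and zero gray. For the FIXME, a hedged Qt-side option (not what this code does) is to return an invalid QVariant for the role, which lets the view fall back to the default palette text color:

case Qt::ForegroundRole:
    // Hypothetical alternative: a default-constructed QVariant means
    // "no data provided for this role", so the view keeps its default text color.
    return QVariant();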
bool operator<(const QStandardItem& other) const override {

View File

@@ -138,6 +138,11 @@ struct Values {
QString room_description;
std::pair<std::vector<std::string>, std::vector<std::string>> ban_list;
QString multiplayer_filter_text;
bool multiplayer_filter_games_owned;
bool multiplayer_filter_hide_empty;
bool multiplayer_filter_hide_full;
// logging
Settings::Setting<bool> show_console{false, "showConsole"};
};

View File

@@ -13,7 +13,6 @@
#endif
// The user data dir
#define ROOT_DIR "."
#define USERDATA_DIR "user"
#ifdef USER_DIR
#define EMU_DATA_DIR USER_DIR

View File

@@ -634,6 +634,10 @@ std::optional<std::string> GetCurrentDir() {
std::string strDir = dir;
#endif
free(dir);
if (!strDir.ends_with(DIR_SEP)) {
strDir += DIR_SEP;
}
return strDir;
} // namespace FileUtil
@@ -646,17 +650,36 @@ bool SetCurrentDir(const std::string& directory) {
}
#if defined(__APPLE__)
std::string GetBundleDirectory() {
CFURLRef BundleRef;
char AppBundlePath[MAXPATHLEN];
std::optional<std::string> GetBundleDirectory() {
// Get the main bundle for the app
BundleRef = CFBundleCopyBundleURL(CFBundleGetMainBundle());
CFStringRef BundlePath = CFURLCopyFileSystemPath(BundleRef, kCFURLPOSIXPathStyle);
CFStringGetFileSystemRepresentation(BundlePath, AppBundlePath, sizeof(AppBundlePath));
CFRelease(BundleRef);
CFRelease(BundlePath);
CFBundleRef bundle_ref = CFBundleGetMainBundle();
if (!bundle_ref) {
return {};
}
return AppBundlePath;
CFURLRef bundle_url_ref = CFBundleCopyBundleURL(bundle_ref);
if (!bundle_url_ref) {
return {};
}
SCOPE_EXIT({ CFRelease(bundle_url_ref); });
CFStringRef bundle_path_ref = CFURLCopyFileSystemPath(bundle_url_ref, kCFURLPOSIXPathStyle);
if (!bundle_path_ref) {
return {};
}
SCOPE_EXIT({ CFRelease(bundle_path_ref); });
char app_bundle_path[MAXPATHLEN];
if (!CFStringGetFileSystemRepresentation(bundle_path_ref, app_bundle_path,
sizeof(app_bundle_path))) {
return {};
}
std::string path_str(app_bundle_path);
if (!path_str.ends_with(DIR_SEP)) {
path_str += DIR_SEP;
}
return path_str;
}
#endif
@@ -732,22 +755,6 @@ static const std::string& GetHomeDirectory() {
}
#endif
std::string GetSysDirectory() {
std::string sysDir;
#if defined(__APPLE__)
sysDir = GetBundleDirectory();
sysDir += DIR_SEP;
sysDir += SYSDATA_DIR;
#else
sysDir = SYSDATA_DIR;
#endif
sysDir += DIR_SEP;
LOG_DEBUG(Common_Filesystem, "Setting to {}:", sysDir);
return sysDir;
}
namespace {
std::unordered_map<UserPath, std::string> g_paths;
std::unordered_map<UserPath, std::string> g_default_paths;
@@ -777,8 +784,10 @@ void SetUserPath(const std::string& path) {
g_paths.emplace(UserPath::ConfigDir, user_path + CONFIG_DIR DIR_SEP);
g_paths.emplace(UserPath::CacheDir, user_path + CACHE_DIR DIR_SEP);
#else
if (FileUtil::Exists(ROOT_DIR DIR_SEP USERDATA_DIR)) {
user_path = ROOT_DIR DIR_SEP USERDATA_DIR DIR_SEP;
auto current_dir = FileUtil::GetCurrentDir();
if (current_dir.has_value() &&
FileUtil::Exists(current_dir.value() + USERDATA_DIR DIR_SEP)) {
user_path = current_dir.value() + USERDATA_DIR DIR_SEP;
g_paths.emplace(UserPath::ConfigDir, user_path + CONFIG_DIR DIR_SEP);
g_paths.emplace(UserPath::CacheDir, user_path + CACHE_DIR DIR_SEP);
} else {
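As a worked example with a hypothetical install location: when the executable runs from /opt/citra/ and a user/ directory exists there, GetCurrentDir() now returns "/opt/citra/" with the trailing separator, so user_path resolves to the absolute "/opt/citra/user/" rather than the relative "./user/" built from ROOT_DIR.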

View File

@@ -193,11 +193,8 @@ void SetCurrentRomPath(const std::string& path);
// Update the Global Path with the new value
void UpdateUserPath(UserPath path, const std::string& filename);
// Returns the path to where the sys file are
[[nodiscard]] std::string GetSysDirectory();
#ifdef __APPLE__
[[nodiscard]] std::string GetBundleDirectory();
[[nodiscard]] std::optional<std::string> GetBundleDirectory();
#endif
#ifdef _WIN32

View File

@@ -327,6 +327,10 @@ add_library(citra_core STATIC
hle/service/ldr_ro/cro_helper.h
hle/service/ldr_ro/ldr_ro.cpp
hle/service/ldr_ro/ldr_ro.h
hle/service/mcu/mcu_hwc.cpp
hle/service/mcu/mcu_hwc.h
hle/service/mcu/mcu.cpp
hle/service/mcu/mcu.h
hle/service/mic/mic_u.cpp
hle/service/mic/mic_u.h
hle/service/mvd/mvd.cpp

View File

@@ -14,18 +14,18 @@ namespace ConfigMem {
Handler::Handler() {
std::memset(&config_mem, 0, sizeof(config_mem));
// Values extracted from firmware 11.2.0-35E
config_mem.kernel_version_min = 0x34;
// Values extracted from firmware 11.17.0-50E
config_mem.kernel_version_min = 0x3a;
config_mem.kernel_version_maj = 0x2;
config_mem.ns_tid = 0x0004013000008002;
config_mem.sys_core_ver = 0x2;
config_mem.unit_info = 0x1; // Bit 0 set for Retail
config_mem.prev_firm = 0x1;
config_mem.ctr_sdk_ver = 0x0000F297;
config_mem.firm_version_min = 0x34;
config_mem.ctr_sdk_ver = 0x0000F450;
config_mem.firm_version_min = 0x3a;
config_mem.firm_version_maj = 0x2;
config_mem.firm_sys_core_ver = 0x2;
config_mem.firm_ctr_sdk_ver = 0x0000F297;
config_mem.firm_ctr_sdk_ver = 0x0000F450;
}
ConfigMemDef& Handler::GetConfigMem() {

View File

@@ -210,10 +210,10 @@ void Process::Set3dsxKernelCaps() {
};
// Similar to Rosalina, we set kernel version to a recent one.
// This is 11.2.0, to be consistent with core/hle/kernel/config_mem.cpp
// This is 11.17.0, to be consistent with core/hle/kernel/config_mem.cpp
// TODO: refactor kernel version out so it is configurable and consistent
// among all relevant places.
kernel_version = 0x234;
kernel_version = 0x23a;
}
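For reference, 0x23a is (0x2 << 8) | 0x3a, i.e. exactly the kernel_version_maj/kernel_version_min pair written to config memory in core/hle/kernel/config_mem.cpp above.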
void Process::Run(s32 main_thread_priority, u32 stack_size) {

View File

@@ -373,7 +373,10 @@ ResultVal<AppletManager::InitializeResult> AppletManager::Initialize(AppletId ap
if (active_slot == AppletSlot::Error) {
active_slot = slot;
// Wake up the application.
// APT automatically calls enable on the first registered applet.
Enable(attributes);
// Wake up the applet.
SendParameter({
.sender_id = AppletId::None,
.destination_id = app_id,
@@ -398,7 +401,8 @@ Result AppletManager::Enable(AppletAttributes attributes) {
auto slot_data = GetAppletSlot(slot);
slot_data->registered = true;
if (slot_data->attributes.applet_pos == AppletPos::System &&
if (slot_data->applet_id != AppletId::None &&
slot_data->attributes.applet_pos == AppletPos::System &&
slot_data->attributes.is_home_menu) {
slot_data->attributes.raw |= attributes.raw;
LOG_DEBUG(Service_APT, "Updated home menu attributes to {:08X}.",
@@ -786,16 +790,23 @@ Result AppletManager::PrepareToStartSystemApplet(AppletId applet_id) {
Result AppletManager::StartSystemApplet(AppletId applet_id, std::shared_ptr<Kernel::Object> object,
const std::vector<u8>& buffer) {
auto source_applet_id = AppletId::None;
auto source_applet_id = AppletId::Application;
if (last_system_launcher_slot != AppletSlot::Error) {
const auto slot_data = GetAppletSlot(last_system_launcher_slot);
source_applet_id = slot_data->applet_id;
const auto launcher_slot_data = GetAppletSlot(last_system_launcher_slot);
source_applet_id = launcher_slot_data->applet_id;
// If a system applet is launching another system applet, reset the slot to avoid conflicts.
// This is needed because system applets won't necessarily call CloseSystemApplet before
// exiting.
if (last_system_launcher_slot == AppletSlot::SystemApplet) {
slot_data->Reset();
// APT generally clears and terminates the caller of StartSystemApplet. This helps in
// situations such as a system applet launching another system applet, which would
// otherwise deadlock.
// TODO: In real APT, the check for AppletSlot::Application does not exist; there is
// TODO: something wrong with our implementation somewhere that makes this necessary.
// TODO: Otherwise, games that attempt to launch system applets will be cleared and
// TODO: emulation will crash.
if (!launcher_slot_data->registered ||
(last_system_launcher_slot != AppletSlot::Application &&
!launcher_slot_data->attributes.no_exit_on_system_applet)) {
launcher_slot_data->Reset();
// TODO: Implement launcher process termination.
}
}

View File

@@ -152,6 +152,7 @@ union AppletAttributes {
u32 raw;
BitField<0, 3, AppletPos> applet_pos;
BitField<28, 1, u32> no_exit_on_system_applet;
BitField<29, 1, u32> is_home_menu;
AppletAttributes() : raw(0) {}

File diff suppressed because it is too large.

View File

@@ -18,6 +18,7 @@
#include <boost/serialization/vector.hpp>
#include <boost/serialization/weak_ptr.hpp>
#include <httplib.h>
#include "common/thread.h"
#include "core/hle/ipc_helpers.h"
#include "core/hle/kernel/shared_memory.h"
#include "core/hle/service/service.h"
@@ -48,12 +49,25 @@ constexpr u32 TotalRequestMethods = 8;
enum class RequestState : u8 {
NotStarted = 0x1, // Request has not started yet.
InProgress = 0x5, // Request in progress, sending request over the network.
ReadyToDownloadContent = 0x7, // Ready to download the content. (needs verification)
ReadyToDownload = 0x8, // Ready to download?
ConnectingToServer = 0x5, // Request in progress, connecting to server.
SendingRequest = 0x6, // Request in progress, sending HTTP request.
ReceivingResponse = 0x7, // Request in progress, receiving HTTP response.
ReadyToDownloadContent = 0x8, // Ready to download the content.
TimedOut = 0xA, // Request timed out?
};
enum class PostDataEncoding : u8 {
Auto = 0x0,
AsciiForm = 0x1,
MultipartForm = 0x2,
};
enum class PostDataType : u8 {
AsciiForm = 0x0,
MultipartForm = 0x1,
Raw = 0x2,
};
enum class ClientCertID : u32 {
Default = 0x40, // Default client cert
};
@@ -197,6 +211,41 @@ public:
friend class boost::serialization::access;
};
struct Param {
Param(const std::vector<u8>& value)
: name(value.begin(), value.end()), value(value.begin(), value.end()){};
Param(const std::string& name, const std::string& value) : name(name), value(value){};
Param(const std::string& name, const std::vector<u8>& value)
: name(name), value(value.begin(), value.end()), is_binary(true){};
std::string name;
std::string value;
bool is_binary = false;
httplib::MultipartFormData ToMultipartForm() const {
httplib::MultipartFormData form;
form.name = name;
form.content = value;
if (is_binary) {
form.content_type = "application/octet-stream";
// TODO(DaniElectra): httplib doesn't support setting Content-Transfer-Encoding,
// while the 3DS sets Content-Transfer-Encoding: binary if a binary value is set
}
return form;
}
private:
template <class Archive>
void serialize(Archive& ar, const unsigned int) {
ar& name;
ar& value;
ar& is_binary;
}
friend class boost::serialization::access;
};
using Params = std::multimap<std::string, Param>;
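A brief usage sketch of the helper above (hypothetical field names; the enclosing class is assumed to be the HTTP request context, as the surrounding members suggest):

Param text_field("username", std::string("citra"));
Param blob_field("icon", std::vector<u8>{0x89, 0x50, 0x4E, 0x47});
httplib::MultipartFormData part = blob_field.ToMultipartForm();
// part.name == "icon" and part.content carries the raw bytes;
// part.content_type == "application/octet-stream" because is_binary was set,
// while text_field.ToMultipartForm() leaves content_type empty.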
Handle handle;
u32 session_id;
std::string url;
@@ -208,8 +257,14 @@ public:
u32 socket_buffer_size;
std::vector<RequestHeader> headers;
const ClCertAData* clcert_data;
httplib::Params post_data;
Params post_data;
std::string post_data_raw;
PostDataEncoding post_data_encoding = PostDataEncoding::Auto;
PostDataType post_data_type;
std::string multipart_boundary;
bool force_multipart = false;
bool chunked_request = false;
u32 chunked_content_length;
std::future<void> request_future;
std::atomic<u64> current_download_size_bytes;
@@ -217,12 +272,19 @@ public:
std::size_t current_copied_data;
bool uses_default_client_cert{};
httplib::Response response;
Common::Event finish_post_data;
void ParseAsciiPostData();
std::string ParseMultipartFormData();
void MakeRequest();
void MakeRequestNonSSL(httplib::Request& request, const URLInfo& url_info,
std::vector<Context::RequestHeader>& pending_headers);
void MakeRequestSSL(httplib::Request& request, const URLInfo& url_info,
std::vector<Context::RequestHeader>& pending_headers);
bool ContentProvider(size_t offset, size_t length, httplib::DataSink& sink);
bool ChunkedContentProvider(size_t offset, httplib::DataSink& sink);
std::size_t HandleHeaderWrite(std::vector<Context::RequestHeader>& pending_headers,
httplib::Stream& strm, httplib::Headers& httplib_headers);
};
struct SessionData : public Kernel::SessionRequestHandler::SessionDataBase {
@@ -308,6 +370,16 @@ private:
*/
void CancelConnection(Kernel::HLERequestContext& ctx);
/**
* HTTP_C::GetRequestState service function
* Inputs:
* 1 : Context handle
* Outputs:
* 1 : Result of function, 0 on success, otherwise error code
* 2 : Request state
*/
void GetRequestState(Kernel::HLERequestContext& ctx);
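The command layout documented here maps directly onto the IPC helpers used throughout this service. A minimal, hypothetical handler shape is sketched below; the context lookup and the state member are assumptions, not the actual implementation:

void HTTP_C::GetRequestState(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx);
    const u32 context_handle = rp.Pop<u32>(); // 1 : Context handle
    // ... look up the request context for context_handle (omitted) ...
    const auto state = RequestState::NotStarted; // placeholder for the looked-up state
    IPC::RequestBuilder rb = rp.MakeBuilder(2, 0);
    rb.Push(ResultSuccess);           // 1 : Result of function
    rb.Push(static_cast<u32>(state)); // 2 : Request state
}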
/**
* HTTP_C::GetDownloadSizeState service function
* Inputs:
@@ -418,6 +490,21 @@ private:
*/
void AddPostDataAscii(Kernel::HLERequestContext& ctx);
/**
* HTTP_C::AddPostDataBinary service function
* Inputs:
* 1 : Context handle
* 2 : Form name buffer size, including null-terminator.
* 3 : Form value buffer size
* 4 : (FormNameSize<<14) | 0xC02
* 5 : Form name data pointer
* 6 : (FormValueSize<<4) | 10
* 7 : Form value data pointer
* Outputs:
* 1 : Result of function, 0 on success, otherwise error code
*/
void AddPostDataBinary(Kernel::HLERequestContext& ctx);
/**
* HTTP_C::AddPostDataRaw service function
* Inputs:
@@ -430,6 +517,140 @@ private:
*/
void AddPostDataRaw(Kernel::HLERequestContext& ctx);
/**
* HTTP_C::SetPostDataType service function
* Inputs:
* 1 : Context handle
* 2 : Post data type
* Outputs:
* 1 : Result of function, 0 on success, otherwise error code
*/
void SetPostDataType(Kernel::HLERequestContext& ctx);
/**
* HTTP_C::SendPostDataAscii service function
* Inputs:
* 1 : Context handle
* 2 : Form name buffer size, including null-terminator.
* 3 : Form value buffer size, including null-terminator.
* 4 : (FormNameSize<<14) | 0xC02
* 5 : Form name data pointer
* 6 : (FormValueSize<<4) | 10
* 7 : Form value data pointer
* Outputs:
* 1 : Result of function, 0 on success, otherwise error code
*/
void SendPostDataAscii(Kernel::HLERequestContext& ctx);
/**
* HTTP_C::SendPostDataAsciiTimeout service function
* Inputs:
* 1 : Context handle
* 2 : Form name buffer size, including null-terminator.
* 3 : Form value buffer size, including null-terminator.
* 4-5 : u64 nanoseconds delay
* 6 : (FormNameSize<<14) | 0xC02
* 7 : Form name data pointer
* 8 : (FormValueSize<<4) | 10
* 9 : Form value data pointer
* Outputs:
* 1 : Result of function, 0 on success, otherwise error code
*/
void SendPostDataAsciiTimeout(Kernel::HLERequestContext& ctx);
/**
* SendPostDataAsciiImpl:
* Implements SendPostDataAscii and SendPostDataAsciiTimeout service functions
*/
void SendPostDataAsciiImpl(Kernel::HLERequestContext& ctx, bool timeout);
/**
* HTTP_C::SendPostDataBinary service function
* Inputs:
* 1 : Context handle
* 2 : Form name buffer size, including null-terminator.
* 3 : Form value buffer size
* 4 : (FormNameSize<<14) | 0xC02
* 5 : Form name data pointer
* 6 : (FormValueSize<<4) | 10
* 7 : Form value data pointer
* Outputs:
* 1 : Result of function, 0 on success, otherwise error code
*/
void SendPostDataBinary(Kernel::HLERequestContext& ctx);
/**
* HTTP_C::SendPostDataBinaryTimeout service function
* Inputs:
* 1 : Context handle
* 2 : Form name buffer size, including null-terminator.
* 3 : Form value buffer size
* 4-5 : u64 nanoseconds delay
* 6 : (FormNameSize<<14) | 0xC02
* 7 : Form name data pointer
* 8 : (FormValueSize<<4) | 10
* 9 : Form value data pointer
* Outputs:
* 1 : Result of function, 0 on success, otherwise error code
*/
void SendPostDataBinaryTimeout(Kernel::HLERequestContext& ctx);
/**
* SendPostDataBinaryImpl:
* Implements SendPostDataBinary and SendPostDataBinaryTimeout service functions
*/
void SendPostDataBinaryImpl(Kernel::HLERequestContext& ctx, bool timeout);
/**
* HTTP_C::SendPostDataRaw service function
* Inputs:
* 1 : Context handle
* 2 : Post data length
* 3-4: (Mapped buffer) Post data
* Outputs:
* 1 : Result of function, 0 on success, otherwise error code
* 2-3: (Mapped buffer) Post data
*/
void SendPostDataRaw(Kernel::HLERequestContext& ctx);
/**
* HTTP_C::SendPostDataRawTimeout service function
* Inputs:
* 1 : Context handle
* 2 : Post data length
* 3-4: u64 nanoseconds delay
* 5-6: (Mapped buffer) Post data
* Outputs:
* 1 : Result of function, 0 on success, otherwise error code
* 2-3: (Mapped buffer) Post data
*/
void SendPostDataRawTimeout(Kernel::HLERequestContext& ctx);
/**
* SendPostDataRawImpl:
* Implements SendPostDataRaw and SendPostDataRawTimeout service functions
*/
void SendPostDataRawImpl(Kernel::HLERequestContext& ctx, bool timeout);
/**
* HTTP_C::NotifyFinishSendPostData service function
* Inputs:
* 1 : Context handle
* Outputs:
* 1 : Result of function, 0 on success, otherwise error code
*/
void NotifyFinishSendPostData(Kernel::HLERequestContext& ctx);
/**
* HTTP_C::SetPostDataEncoding service function
* Inputs:
* 1 : Context handle
* 2 : Post data encoding
* Outputs:
* 1 : Result of function, 0 on success, otherwise error code
*/
void SetPostDataEncoding(Kernel::HLERequestContext& ctx);
/**
* HTTP_C::GetResponseHeader service function
* Inputs:
@@ -445,6 +666,28 @@ private:
*/
void GetResponseHeader(Kernel::HLERequestContext& ctx);
/**
* HTTP_C::GetResponseHeaderTimeout service function
* Inputs:
* 1 : Context handle
* 2 : Header name length
* 3 : Return value length
* 4-5 : u64 nanoseconds delay
* 6-7 : (Static buffer) Header name
* 8-9 : (Mapped buffer) Header value
* Outputs:
* 1 : Result of function, 0 on success, otherwise error code
* 2 : Header value copied size
* 3-4: (Mapped buffer) Header value
*/
void GetResponseHeaderTimeout(Kernel::HLERequestContext& ctx);
/**
* GetResponseHeaderImpl:
* Implements GetResponseHeader and GetResponseHeaderTimeout service functions
*/
void GetResponseHeaderImpl(Kernel::HLERequestContext& ctx, bool timeout);
/**
* HTTP_C::GetResponseStatusCode service function
* Inputs:
@@ -578,6 +821,17 @@ private:
*/
void SetKeepAlive(Kernel::HLERequestContext& ctx);
/**
* HTTP_C::SetPostDataTypeSize service function
* Inputs:
* 1 : Context handle
* 2 : Post data type
* 3 : Content length size
* Outputs:
* 1 : Result of function, 0 on success, otherwise error code
*/
void SetPostDataTypeSize(Kernel::HLERequestContext& ctx);
/**
* HTTP_C::Finalize service function
* Outputs:

View File

@@ -0,0 +1,16 @@
// Copyright 2024 Citra Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include "core/core.h"
#include "core/hle/service/mcu/mcu.h"
#include "core/hle/service/mcu/mcu_hwc.h"
namespace Service::MCU {
void InstallInterfaces(Core::System& system) {
auto& service_manager = system.ServiceManager();
std::make_shared<HWC>()->InstallAsService(service_manager);
}
} // namespace Service::MCU

View File

@@ -0,0 +1,15 @@
// Copyright 2024 Citra Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
namespace Core {
class System;
}
namespace Service::MCU {
void InstallInterfaces(Core::System& system);
} // namespace Service::MCU

View File

@@ -0,0 +1,36 @@
// Copyright 2024 Citra Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include "common/archives.h"
#include "core/hle/service/mcu/mcu_hwc.h"
SERIALIZE_EXPORT_IMPL(Service::MCU::HWC)
namespace Service::MCU {
HWC::HWC() : ServiceFramework("mcu::HWC", 1) {
static const FunctionInfo functions[] = {
// clang-format off
{0x0001, nullptr, "ReadRegister"},
{0x0002, nullptr, "WriteRegister"},
{0x0003, nullptr, "GetInfoRegisters"},
{0x0004, nullptr, "GetBatteryVoltage"},
{0x0005, nullptr, "GetBatteryLevel"},
{0x0006, nullptr, "SetPowerLEDPattern"},
{0x0007, nullptr, "SetWifiLEDState"},
{0x0008, nullptr, "SetCameraLEDPattern"},
{0x0009, nullptr, "Set3DLEDState"},
{0x000A, nullptr, "SetInfoLEDPattern"},
{0x000B, nullptr, "GetSoundVolume"},
{0x000C, nullptr, "SetTopScreenFlicker"},
{0x000D, nullptr, "SetBottomScreenFlicker"},
{0x000F, nullptr, "GetRtcTime"},
{0x0010, nullptr, "GetMcuFwVerHigh"},
{0x0011, nullptr, "GetMcuFwVerLow"},
// clang-format on
};
RegisterHandlers(functions);
}
} // namespace Service::MCU
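Every command here is registered with a nullptr handler. Purely as a hedged sketch (hypothetical behavior and reply layout, not taken from hardware), one of the stubs could later be wired up in the usual ServiceFramework way:

void HWC::GetBatteryLevel(Kernel::HLERequestContext& ctx) {
    IPC::RequestParser rp(ctx);
    IPC::RequestBuilder rb = rp.MakeBuilder(2, 0);
    rb.Push(ResultSuccess);
    rb.Push<u32>(100); // hypothetical: always report a full battery
}
// ...declared in mcu_hwc.h and registered as {0x0005, &HWC::GetBatteryLevel, "GetBatteryLevel"}.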

View File

@@ -0,0 +1,21 @@
// Copyright 2024 Citra Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include "core/hle/service/service.h"
namespace Service::MCU {
class HWC final : public ServiceFramework<HWC> {
public:
explicit HWC();
private:
SERVICE_SERIALIZATION_SIMPLE
};
} // namespace Service::MCU
BOOST_CLASS_EXPORT_KEY(Service::MCU::HWC)

View File

@@ -35,6 +35,7 @@
#include "core/hle/service/http/http_c.h"
#include "core/hle/service/ir/ir.h"
#include "core/hle/service/ldr_ro/ldr_ro.h"
#include "core/hle/service/mcu/mcu.h"
#include "core/hle/service/mic/mic_u.h"
#include "core/hle/service/mvd/mvd.h"
#include "core/hle/service/ndm/ndm_u.h"
@@ -101,7 +102,7 @@ const std::array<ServiceModuleInfo, 41> service_module_map{
{"CDC", 0x00040130'00001802, nullptr},
{"GPIO", 0x00040130'00001B02, nullptr},
{"I2C", 0x00040130'00001E02, nullptr},
{"MCU", 0x00040130'00001F02, nullptr},
{"MCU", 0x00040130'00001F02, MCU::InstallInterfaces},
{"MP", 0x00040130'00002A02, nullptr},
{"PDN", 0x00040130'00002102, nullptr},
{"SPI", 0x00040130'00002302, nullptr}}};

View File

@@ -598,9 +598,10 @@ static_assert(std::is_trivially_copyable_v<CTRPollFD>,
union CTRSockAddr {
/// Structure to represent a raw sockaddr
struct {
u8 len; ///< The length of the entire structure, only the set fields count
u8 sa_family; ///< The address family of the sockaddr
u8 sa_data[0x1A]; ///< The extra data, this varies, depending on the address family
u8 len; ///< The length of the entire structure, only the set fields count
u8 sa_family; ///< The address family of the sockaddr
std::array<u8, 0x1A>
sa_data; ///< The extra data, this varies, depending on the address family
} raw;
/// Structure to represent the 3ds' sockaddr_in structure
@@ -612,36 +613,57 @@ union CTRSockAddr {
} in;
static_assert(sizeof(CTRSockAddrIn) == 8, "Invalid CTRSockAddrIn size");
struct CTRSockAddrIn6 {
u8 len; ///< The length of the entire structure
u8 sin6_family; ///< The address family of the sockaddr_in6
u16 sin6_port; ///< The port associated with this sockaddr_in6
std::array<u8, 0x10> sin6_addr; ///< The actual address of the sockaddr_in6
u32 sin6_flowinfo; ///< The flow info of the sockaddr_in6
u32 sin6_scope_id; ///< The scope ID of the sockaddr_in6
} in6;
static_assert(sizeof(CTRSockAddrIn6) == 28, "Invalid CTRSockAddrIn6 size");
/// Convert a 3DS CTRSockAddr to a platform-specific sockaddr
static sockaddr ToPlatform(CTRSockAddr const& ctr_addr) {
sockaddr result;
ASSERT_MSG(ctr_addr.raw.len == sizeof(CTRSockAddrIn),
static std::pair<sockaddr_storage, socklen_t> ToPlatform(CTRSockAddr const& ctr_addr) {
sockaddr_storage result{};
socklen_t result_len = sizeof(result.ss_family);
ASSERT_MSG(ctr_addr.raw.len == sizeof(CTRSockAddrIn) ||
ctr_addr.raw.len == sizeof(CTRSockAddrIn6),
"Unhandled address size (len) in CTRSockAddr::ToPlatform");
result.sa_family = SocketDomainToPlatform(ctr_addr.raw.sa_family);
std::memset(result.sa_data, 0, sizeof(result.sa_data));
result.ss_family = SocketDomainToPlatform(ctr_addr.raw.sa_family);
// We can not guarantee ABI compatibility between platforms so we copy the fields manually
switch (result.sa_family) {
switch (result.ss_family) {
case AF_INET: {
sockaddr_in* result_in = reinterpret_cast<sockaddr_in*>(&result);
result_in->sin_port = ctr_addr.in.sin_port;
result_in->sin_addr.s_addr = ctr_addr.in.sin_addr;
std::memset(result_in->sin_zero, 0, sizeof(result_in->sin_zero));
result_len = sizeof(sockaddr_in);
break;
}
case AF_INET6: {
sockaddr_in6* result_in6 = reinterpret_cast<sockaddr_in6*>(&result);
result_in6->sin6_port = ctr_addr.in6.sin6_port;
memcpy(&result_in6->sin6_addr, ctr_addr.in6.sin6_addr.data(),
sizeof(result_in6->sin6_addr));
result_in6->sin6_flowinfo = ctr_addr.in6.sin6_flowinfo;
result_in6->sin6_scope_id = ctr_addr.in6.sin6_scope_id;
result_len = sizeof(sockaddr_in6);
break;
}
default:
ASSERT_MSG(false, "Unhandled address family (sa_family) in CTRSockAddr::ToPlatform");
break;
}
return result;
return std::make_pair(result, result_len);
}
/// Convert a platform-specific sockaddr to a 3DS CTRSockAddr
static CTRSockAddr FromPlatform(sockaddr const& addr) {
static CTRSockAddr FromPlatform(sockaddr_storage const& addr) {
CTRSockAddr result;
result.raw.sa_family = static_cast<u8>(SocketDomainFromPlatform(addr.sa_family));
result.raw.sa_family = static_cast<u8>(SocketDomainFromPlatform(addr.ss_family));
// We can not guarantee ABI compatibility between platforms so we copy the fields manually
switch (addr.sa_family) {
switch (addr.ss_family) {
case AF_INET: {
sockaddr_in const* addr_in = reinterpret_cast<sockaddr_in const*>(&addr);
result.raw.len = sizeof(CTRSockAddrIn);
@@ -649,6 +671,15 @@ union CTRSockAddr {
result.in.sin_addr = addr_in->sin_addr.s_addr;
break;
}
case AF_INET6: {
sockaddr_in6 const* addr_in6 = reinterpret_cast<sockaddr_in6 const*>(&addr);
result.raw.len = sizeof(CTRSockAddrIn6);
result.in6.sin6_port = addr_in6->sin6_port;
memcpy(result.in6.sin6_addr.data(), &addr_in6->sin6_addr, sizeof(result.in6.sin6_addr));
result.in6.sin6_flowinfo = addr_in6->sin6_flowinfo;
result.in6.sin6_scope_id = addr_in6->sin6_scope_id;
break;
}
default:
ASSERT_MSG(false, "Unhandled address family (sa_family) in CTRSockAddr::ToPlatform");
break;
@@ -707,7 +738,8 @@ struct CTRAddrInfo {
.ai_family = static_cast<s32_le>(SocketDomainFromPlatform(addr.ai_family)),
.ai_socktype = static_cast<s32_le>(SocketTypeFromPlatform(addr.ai_socktype)),
.ai_protocol = static_cast<s32_le>(SocketProtocolFromPlatform(addr.ai_protocol)),
.ai_addr = CTRSockAddr::FromPlatform(*addr.ai_addr),
.ai_addr =
CTRSockAddr::FromPlatform(*reinterpret_cast<sockaddr_storage*>(addr.ai_addr)),
};
ctr_addr.ai_addrlen = static_cast<s32_le>(ctr_addr.ai_addr.raw.len);
if (addr.ai_canonname)
@@ -840,9 +872,9 @@ void SOC_U::Bind(Kernel::HLERequestContext& ctx) {
CTRSockAddr ctr_sock_addr;
std::memcpy(&ctr_sock_addr, sock_addr_buf.data(), std::min<size_t>(len, sizeof(ctr_sock_addr)));
sockaddr sock_addr = CTRSockAddr::ToPlatform(ctr_sock_addr);
auto [sock_addr, sock_addr_len] = CTRSockAddr::ToPlatform(ctr_sock_addr);
s32 ret = ::bind(holder.socket_fd, &sock_addr, sizeof(sock_addr));
s32 ret = ::bind(holder.socket_fd, reinterpret_cast<sockaddr*>(&sock_addr), sock_addr_len);
if (ret != 0)
ret = TranslateError(GET_ERRNO);
@@ -937,7 +969,7 @@ void SOC_U::Accept(Kernel::HLERequestContext& ctx) {
// Output
s32 ret{};
int accept_error;
sockaddr addr;
sockaddr_storage addr;
};
auto async_data = std::make_shared<AsyncData>();
@@ -950,7 +982,8 @@ void SOC_U::Accept(Kernel::HLERequestContext& ctx) {
[async_data](Kernel::HLERequestContext& ctx) {
socklen_t addr_len = sizeof(async_data->addr);
async_data->ret = static_cast<u32>(
::accept(async_data->fd_info->socket_fd, &async_data->addr, &addr_len));
::accept(async_data->fd_info->socket_fd,
reinterpret_cast<sockaddr*>(&async_data->addr), &addr_len));
async_data->accept_error = (async_data->ret == SOCKET_ERROR_VALUE) ? GET_ERRNO : 0;
return 0;
},
@@ -1109,10 +1142,10 @@ void SOC_U::SendToOther(Kernel::HLERequestContext& ctx) {
CTRSockAddr ctr_dest_addr;
std::memcpy(&ctr_dest_addr, dest_addr_buffer.data(),
std::min<size_t>(addr_len, sizeof(ctr_dest_addr)));
sockaddr dest_addr = CTRSockAddr::ToPlatform(ctr_dest_addr);
ret = static_cast<s32>(::sendto(holder.socket_fd,
reinterpret_cast<const char*>(input_buff.data()), len,
flags, &dest_addr, sizeof(dest_addr)));
auto [dest_addr, dest_addr_len] = CTRSockAddr::ToPlatform(ctr_dest_addr);
ret = static_cast<s32>(
::sendto(holder.socket_fd, reinterpret_cast<const char*>(input_buff.data()), len, flags,
reinterpret_cast<sockaddr*>(&dest_addr), dest_addr_len));
} else {
ret = static_cast<s32>(::sendto(holder.socket_fd,
reinterpret_cast<const char*>(input_buff.data()), len,
@@ -1159,10 +1192,10 @@ s32 SOC_U::SendToImpl(SocketHolder& holder, u32 len, u32 flags, u32 addr_len,
CTRSockAddr ctr_dest_addr;
std::memcpy(&ctr_dest_addr, dest_addr_buff,
std::min<size_t>(addr_len, sizeof(ctr_dest_addr)));
sockaddr dest_addr = CTRSockAddr::ToPlatform(ctr_dest_addr);
ret = static_cast<s32>(::sendto(holder.socket_fd,
reinterpret_cast<const char*>(input_buff.data()), len,
flags, &dest_addr, sizeof(dest_addr)));
auto [dest_addr, dest_addr_len] = CTRSockAddr::ToPlatform(ctr_dest_addr);
ret = static_cast<s32>(
::sendto(holder.socket_fd, reinterpret_cast<const char*>(input_buff.data()), len, flags,
reinterpret_cast<sockaddr*>(&dest_addr), dest_addr_len));
} else {
ret = static_cast<s32>(::sendto(holder.socket_fd,
reinterpret_cast<const char*>(input_buff.data()), len,
@@ -1294,7 +1327,7 @@ void SOC_U::RecvFromOther(Kernel::HLERequestContext& ctx) {
ctx.RunAsync(
[async_data](Kernel::HLERequestContext& ctx) {
sockaddr src_addr;
sockaddr_storage src_addr;
socklen_t src_addr_len = sizeof(src_addr);
CTRSockAddr ctr_src_addr;
// Windows, why do you have to be so special...
@@ -1302,10 +1335,10 @@ void SOC_U::RecvFromOther(Kernel::HLERequestContext& ctx) {
RecvBusyWaitForEvent(*async_data->fd_info);
}
if (async_data->addr_len > 0) {
async_data->ret = static_cast<s32>(
::recvfrom(async_data->fd_info->socket_fd,
reinterpret_cast<char*>(async_data->output_buff.data()),
async_data->len, async_data->flags, &src_addr, &src_addr_len));
async_data->ret = static_cast<s32>(::recvfrom(
async_data->fd_info->socket_fd,
reinterpret_cast<char*>(async_data->output_buff.data()), async_data->len,
async_data->flags, reinterpret_cast<sockaddr*>(&src_addr), &src_addr_len));
if (async_data->ret >= 0 && src_addr_len > 0) {
ctr_src_addr = CTRSockAddr::FromPlatform(src_addr);
std::memcpy(async_data->addr_buff.data(), &ctr_src_addr,
@@ -1411,7 +1444,7 @@ void SOC_U::RecvFrom(Kernel::HLERequestContext& ctx) {
ctx.RunAsync(
[async_data](Kernel::HLERequestContext& ctx) {
sockaddr src_addr;
sockaddr_storage src_addr;
socklen_t src_addr_len = sizeof(src_addr);
CTRSockAddr ctr_src_addr;
if (async_data->is_blocking) {
@@ -1419,10 +1452,10 @@ void SOC_U::RecvFrom(Kernel::HLERequestContext& ctx) {
}
if (async_data->addr_len > 0) {
// Only get src adr if input adr available
async_data->ret = static_cast<s32>(
::recvfrom(async_data->fd_info->socket_fd,
reinterpret_cast<char*>(async_data->output_buff.data()),
async_data->len, async_data->flags, &src_addr, &src_addr_len));
async_data->ret = static_cast<s32>(::recvfrom(
async_data->fd_info->socket_fd,
reinterpret_cast<char*>(async_data->output_buff.data()), async_data->len,
async_data->flags, reinterpret_cast<sockaddr*>(&src_addr), &src_addr_len));
if (async_data->ret >= 0 && src_addr_len > 0) {
ctr_src_addr = CTRSockAddr::FromPlatform(src_addr);
std::memcpy(async_data->addr_buff.data(), &ctr_src_addr,
@@ -1558,9 +1591,10 @@ void SOC_U::GetSockName(Kernel::HLERequestContext& ctx) {
}
SocketHolder& holder = socket_holder_optional->get();
sockaddr dest_addr;
sockaddr_storage dest_addr;
socklen_t dest_addr_len = sizeof(dest_addr);
s32 ret = ::getsockname(holder.socket_fd, &dest_addr, &dest_addr_len);
s32 ret =
::getsockname(holder.socket_fd, reinterpret_cast<sockaddr*>(&dest_addr), &dest_addr_len);
CTRSockAddr ctr_dest_addr = CTRSockAddr::FromPlatform(dest_addr);
std::vector<u8> dest_addr_buff(sizeof(ctr_dest_addr));
@@ -1647,10 +1681,11 @@ void SOC_U::GetHostByAddr(Kernel::HLERequestContext& ctx) {
[[maybe_unused]] u32 out_buf_len = rp.Pop<u32>();
auto addr = rp.PopStaticBuffer();
sockaddr platform_addr = CTRSockAddr::ToPlatform(*reinterpret_cast<CTRSockAddr*>(addr.data()));
auto [platform_addr, platform_addr_len] =
CTRSockAddr::ToPlatform(*reinterpret_cast<CTRSockAddr*>(addr.data()));
struct hostent* result =
::gethostbyaddr(reinterpret_cast<char*>(&platform_addr), sizeof(platform_addr), type);
::gethostbyaddr(reinterpret_cast<char*>(&platform_addr), platform_addr_len, type);
IPC::RequestBuilder rb = rp.MakeBuilder(2, 2);
rb.Push(ResultSuccess);
@@ -1698,9 +1733,10 @@ void SOC_U::GetPeerName(Kernel::HLERequestContext& ctx) {
}
SocketHolder& holder = socket_holder_optional->get();
sockaddr dest_addr;
sockaddr_storage dest_addr;
socklen_t dest_addr_len = sizeof(dest_addr);
const int ret = ::getpeername(holder.socket_fd, &dest_addr, &dest_addr_len);
const int ret =
::getpeername(holder.socket_fd, reinterpret_cast<sockaddr*>(&dest_addr), &dest_addr_len);
CTRSockAddr ctr_dest_addr = CTRSockAddr::FromPlatform(dest_addr);
std::vector<u8> dest_addr_buff(sizeof(ctr_dest_addr));
@@ -1741,7 +1777,7 @@ void SOC_U::Connect(Kernel::HLERequestContext& ctx) {
struct AsyncData {
// Input
SocketHolder* fd_info;
sockaddr input_addr;
std::pair<sockaddr_storage, socklen_t> input_addr;
u32 socket_handle;
u32 pid;
@@ -1763,8 +1799,9 @@ void SOC_U::Connect(Kernel::HLERequestContext& ctx) {
ctx.RunAsync(
[async_data](Kernel::HLERequestContext& ctx) {
async_data->ret = ::connect(async_data->fd_info->socket_fd, &async_data->input_addr,
sizeof(async_data->input_addr));
async_data->ret = ::connect(async_data->fd_info->socket_fd,
reinterpret_cast<sockaddr*>(&async_data->input_addr.first),
async_data->input_addr.second);
async_data->connect_error = (async_data->ret == SOCKET_ERROR_VALUE) ? GET_ERRNO : 0;
return 0;
},
@@ -2047,14 +2084,15 @@ void SOC_U::GetNameInfoImpl(Kernel::HLERequestContext& ctx) {
CTRSockAddr ctr_sa;
std::memcpy(&ctr_sa, sa_buff.data(), socklen);
sockaddr sa = CTRSockAddr::ToPlatform(ctr_sa);
auto [sa, sa_len] = CTRSockAddr::ToPlatform(ctr_sa);
std::vector<u8> host(hostlen);
std::vector<u8> serv(servlen);
char* host_data = hostlen > 0 ? reinterpret_cast<char*>(host.data()) : nullptr;
char* serv_data = servlen > 0 ? reinterpret_cast<char*>(serv.data()) : nullptr;
s32 ret = getnameinfo(&sa, sizeof(sa), host_data, hostlen, serv_data, servlen, flags);
s32 ret = getnameinfo(reinterpret_cast<sockaddr*>(&sa), sa_len, host_data, hostlen, serv_data,
servlen, flags);
if (ret == SOCKET_ERROR_VALUE) {
ret = TranslateError(GET_ERRNO);
}

View File

@@ -429,18 +429,20 @@ Common::ParamPackage SDLState::GetSDLControllerButtonBindByGUID(
#if SDL_VERSION_ATLEAST(2, 0, 6)
{
const SDL_ExtendedGameControllerBind extended_bind =
controller->bindings[mapped_button];
if (extended_bind.input.axis.axis_max < extended_bind.input.axis.axis_min) {
params.Set("direction", "-");
} else {
params.Set("direction", "+");
if (mapped_button != SDL_CONTROLLER_BUTTON_INVALID) {
const SDL_ExtendedGameControllerBind extended_bind =
controller->bindings[mapped_button];
if (extended_bind.input.axis.axis_max < extended_bind.input.axis.axis_min) {
params.Set("direction", "-");
} else {
params.Set("direction", "+");
}
params.Set("threshold", (extended_bind.input.axis.axis_min +
(extended_bind.input.axis.axis_max -
extended_bind.input.axis.axis_min) /
2.0f) /
SDL_JOYSTICK_AXIS_MAX);
}
params.Set(
"threshold",
(extended_bind.input.axis.axis_min +
(extended_bind.input.axis.axis_max - extended_bind.input.axis.axis_min) / 2.0f) /
SDL_JOYSTICK_AXIS_MAX);
}
#else
params.Set("direction", "+"); // lacks extended_bind, so just a guess

View File

@@ -9,6 +9,7 @@ add_executable(tests
core/memory/vm_manager.cpp
precompiled_headers.h
audio_core/hle/hle.cpp
audio_core/hle/source.cpp
audio_core/lle/lle.cpp
audio_core/audio_fixures.h
audio_core/decoder_tests.cpp

View File

@@ -0,0 +1,379 @@
#include <cstdio>
#include <catch2/catch_template_test_macros.hpp>
#include "audio_core/hle/shared_memory.h"
#include "common/settings.h"
#include "tests/audio_core/merryhime_3ds_audio/merry_audio/merry_audio.h"
TEST_CASE_METHOD(MerryAudio::MerryAudioFixture, "Verify SourceStatus::Status::last_buffer_id 1",
"[audio_core][hle]") {
// World's worst triangle wave generator.
// Generates PCM16.
auto fillBuffer = [this](u32* audio_buffer, size_t size, unsigned freq) {
for (size_t i = 0; i < size; i++) {
u32 data = (i % freq) * 256;
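// Pack the same 16-bit sample into both halves of the word, one per stereo channel.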
audio_buffer[i] = (data << 16) | (data & 0xFFFF);
}
DSP_FlushDataCache(audio_buffer, size);
};
constexpr size_t NUM_SAMPLES = 160 * 1;
u32* audio_buffer = (u32*)linearAlloc(NUM_SAMPLES * sizeof(u32));
fillBuffer(audio_buffer, NUM_SAMPLES, 160);
u32* audio_buffer2 = (u32*)linearAlloc(NUM_SAMPLES * sizeof(u32));
fillBuffer(audio_buffer2, NUM_SAMPLES, 80);
u32* audio_buffer3 = (u32*)linearAlloc(NUM_SAMPLES * sizeof(u32));
fillBuffer(audio_buffer3, NUM_SAMPLES, 40);
MerryAudio::AudioState state;
{
std::vector<u8> dspfirm;
SECTION("HLE") {
// The test case assumes HLE AudioCore doesn't require a valid firmware
InitDspCore(Settings::AudioEmulation::HLE);
dspfirm = {0};
}
SECTION("LLE Sanity") {
InitDspCore(Settings::AudioEmulation::LLE);
dspfirm = loadDspFirmFromFile();
}
if (!dspfirm.size()) {
SKIP("Couldn't load firmware\n");
return;
}
auto ret = audioInit(dspfirm);
if (!ret) {
INFO("Couldn't init audio\n");
goto end;
}
state = *ret;
}
state.waitForSync();
initSharedMem(state);
state.notifyDsp();
state.waitForSync();
state.notifyDsp();
state.waitForSync();
state.notifyDsp();
state.waitForSync();
state.notifyDsp();
state.waitForSync();
state.notifyDsp();
{
u16 buffer_id = 0;
size_t next_queue_position = 0;
state.write().source_configurations->config[0].play_position = 0;
state.write().source_configurations->config[0].physical_address =
osConvertVirtToPhys(audio_buffer3);
state.write().source_configurations->config[0].length = NUM_SAMPLES;
state.write().source_configurations->config[0].mono_or_stereo.Assign(
AudioCore::HLE::SourceConfiguration::Configuration::MonoOrStereo::Stereo);
state.write().source_configurations->config[0].format.Assign(
AudioCore::HLE::SourceConfiguration::Configuration::Format::PCM16);
state.write().source_configurations->config[0].fade_in.Assign(false);
state.write().source_configurations->config[0].adpcm_dirty.Assign(false);
state.write().source_configurations->config[0].is_looping.Assign(false);
state.write().source_configurations->config[0].buffer_id = ++buffer_id;
state.write().source_configurations->config[0].partial_reset_flag.Assign(true);
state.write().source_configurations->config[0].play_position_dirty.Assign(true);
state.write().source_configurations->config[0].embedded_buffer_dirty.Assign(true);
state.write()
.source_configurations->config[0]
.buffers[next_queue_position]
.physical_address = osConvertVirtToPhys(buffer_id % 2 ? audio_buffer2 : audio_buffer);
state.write().source_configurations->config[0].buffers[next_queue_position].length =
NUM_SAMPLES;
state.write().source_configurations->config[0].buffers[next_queue_position].adpcm_dirty =
false;
state.write().source_configurations->config[0].buffers[next_queue_position].is_looping =
false;
state.write().source_configurations->config[0].buffers[next_queue_position].buffer_id =
++buffer_id;
state.write().source_configurations->config[0].buffers_dirty |= 1 << next_queue_position;
next_queue_position = (next_queue_position + 1) % 4;
state.write().source_configurations->config[0].buffer_queue_dirty.Assign(true);
state.write().source_configurations->config[0].enable = true;
state.write().source_configurations->config[0].enable_dirty.Assign(true);
state.notifyDsp();
for (size_t frame_count = 0; frame_count < 10; frame_count++) {
state.waitForSync();
if (!state.read().source_statuses->status[0].is_enabled) {
state.write().source_configurations->config[0].enable = true;
state.write().source_configurations->config[0].enable_dirty.Assign(true);
}
if (state.read().source_statuses->status[0].current_buffer_id_dirty) {
if (state.read().source_statuses->status[0].current_buffer_id == buffer_id ||
state.read().source_statuses->status[0].current_buffer_id == 0) {
state.write()
.source_configurations->config[0]
.buffers[next_queue_position]
.physical_address =
osConvertVirtToPhys(buffer_id % 2 ? audio_buffer2 : audio_buffer);
state.write()
.source_configurations->config[0]
.buffers[next_queue_position]
.length = NUM_SAMPLES;
state.write()
.source_configurations->config[0]
.buffers[next_queue_position]
.adpcm_dirty = false;
state.write()
.source_configurations->config[0]
.buffers[next_queue_position]
.is_looping = false;
state.write()
.source_configurations->config[0]
.buffers[next_queue_position]
.buffer_id = ++buffer_id;
state.write().source_configurations->config[0].buffers_dirty |=
1 << next_queue_position;
next_queue_position = (next_queue_position + 1) % 4;
state.write().source_configurations->config[0].buffer_queue_dirty.Assign(true);
}
}
state.notifyDsp();
}
// last_buffer_id should be 0 if the queue is not empty
REQUIRE(state.read().source_statuses->status[0].last_buffer_id == 0);
// Let the queue finish playing
for (size_t frame_count = 0; frame_count < 10; frame_count++) {
state.waitForSync();
state.notifyDsp();
}
// TODO: There seem to be some nuances in how the LLE firmware runs the buffer queue
// that differ from the HLE implementation
// REQUIRE(state.read().source_statuses->status[0].last_buffer_id == 5);
// last_buffer_id should be equal to buffer_id once the queue is empty
REQUIRE(state.read().source_statuses->status[0].last_buffer_id == buffer_id);
}
end:
audioExit(state);
}
TEST_CASE_METHOD(MerryAudio::MerryAudioFixture, "Verify SourceStatus::Status::last_buffer_id 2",
"[audio_core][hle]") {
// World's worst triangle wave generator.
// Generates PCM16.
auto fillBuffer = [this](u32* audio_buffer, size_t size, unsigned freq) {
for (size_t i = 0; i < size; i++) {
u32 data = (i % freq) * 256;
audio_buffer[i] = (data << 16) | (data & 0xFFFF);
}
DSP_FlushDataCache(audio_buffer, size);
};
constexpr size_t NUM_SAMPLES = 160 * 1;
u32* audio_buffer = (u32*)linearAlloc(NUM_SAMPLES * sizeof(u32));
fillBuffer(audio_buffer, NUM_SAMPLES, 160);
u32* audio_buffer2 = (u32*)linearAlloc(NUM_SAMPLES * sizeof(u32));
fillBuffer(audio_buffer2, NUM_SAMPLES, 80);
u32* audio_buffer3 = (u32*)linearAlloc(NUM_SAMPLES * sizeof(u32));
fillBuffer(audio_buffer3, NUM_SAMPLES, 40);
MerryAudio::AudioState state;
{
std::vector<u8> dspfirm;
SECTION("HLE") {
// The test case assumes HLE AudioCore doesn't require a valid firmware
InitDspCore(Settings::AudioEmulation::HLE);
dspfirm = {0};
}
SECTION("LLE Sanity") {
InitDspCore(Settings::AudioEmulation::LLE);
dspfirm = loadDspFirmFromFile();
}
if (!dspfirm.size()) {
SKIP("Couldn't load firmware\n");
return;
}
auto ret = audioInit(dspfirm);
if (!ret) {
INFO("Couldn't init audio\n");
goto end;
}
state = *ret;
}
state.waitForSync();
initSharedMem(state);
state.notifyDsp();
state.waitForSync();
state.notifyDsp();
state.waitForSync();
state.notifyDsp();
state.waitForSync();
state.notifyDsp();
state.waitForSync();
state.notifyDsp();
{
u16 buffer_id = 0;
size_t next_queue_position = 0;
state.write().source_configurations->config[0].play_position = 0;
state.write().source_configurations->config[0].physical_address =
osConvertVirtToPhys(audio_buffer3);
state.write().source_configurations->config[0].length = NUM_SAMPLES;
state.write().source_configurations->config[0].mono_or_stereo.Assign(
AudioCore::HLE::SourceConfiguration::Configuration::MonoOrStereo::Stereo);
state.write().source_configurations->config[0].format.Assign(
AudioCore::HLE::SourceConfiguration::Configuration::Format::PCM16);
state.write().source_configurations->config[0].fade_in.Assign(false);
state.write().source_configurations->config[0].adpcm_dirty.Assign(false);
state.write().source_configurations->config[0].is_looping.Assign(false);
state.write().source_configurations->config[0].buffer_id = ++buffer_id;
state.write().source_configurations->config[0].partial_reset_flag.Assign(true);
state.write().source_configurations->config[0].play_position_dirty.Assign(true);
state.write().source_configurations->config[0].embedded_buffer_dirty.Assign(true);
state.write()
.source_configurations->config[0]
.buffers[next_queue_position]
.physical_address = osConvertVirtToPhys(buffer_id % 2 ? audio_buffer2 : audio_buffer);
state.write().source_configurations->config[0].buffers[next_queue_position].length =
NUM_SAMPLES;
state.write().source_configurations->config[0].buffers[next_queue_position].adpcm_dirty =
false;
state.write().source_configurations->config[0].buffers[next_queue_position].is_looping =
false;
state.write().source_configurations->config[0].buffers[next_queue_position].buffer_id =
++buffer_id;
state.write().source_configurations->config[0].buffers_dirty |= 1 << next_queue_position;
next_queue_position = (next_queue_position + 1) % 4;
state.write().source_configurations->config[0].buffer_queue_dirty.Assign(true);
state.write().source_configurations->config[0].enable = true;
state.write().source_configurations->config[0].enable_dirty.Assign(true);
state.notifyDsp();
for (size_t frame_count = 0; frame_count < 10; frame_count++) {
state.waitForSync();
if (!state.read().source_statuses->status[0].is_enabled) {
state.write().source_configurations->config[0].enable = true;
state.write().source_configurations->config[0].enable_dirty.Assign(true);
}
if (state.read().source_statuses->status[0].current_buffer_id_dirty) {
if (state.read().source_statuses->status[0].current_buffer_id == buffer_id ||
state.read().source_statuses->status[0].current_buffer_id == 0) {
state.write()
.source_configurations->config[0]
.buffers[next_queue_position]
.physical_address =
osConvertVirtToPhys(buffer_id % 2 ? audio_buffer2 : audio_buffer);
state.write()
.source_configurations->config[0]
.buffers[next_queue_position]
.length = NUM_SAMPLES;
state.write()
.source_configurations->config[0]
.buffers[next_queue_position]
.adpcm_dirty = false;
state.write()
.source_configurations->config[0]
.buffers[next_queue_position]
.is_looping = false;
state.write()
.source_configurations->config[0]
.buffers[next_queue_position]
.buffer_id = ++buffer_id;
state.write().source_configurations->config[0].buffers_dirty |=
1 << next_queue_position;
next_queue_position = (next_queue_position + 1) % 4;
state.write().source_configurations->config[0].buffer_queue_dirty.Assign(true);
}
}
state.notifyDsp();
}
// last_buffer_id should be 0 if the queue is not empty
REQUIRE(state.read().source_statuses->status[0].last_buffer_id == 0);
// Let the queue finish playing
for (size_t frame_count = 0; frame_count < 10; frame_count++) {
state.waitForSync();
state.notifyDsp();
}
// TODO: There seem to be some nuances in how the LLE firmware runs the buffer queue
// that differ from the HLE implementation
// REQUIRE(state.read().source_statuses->status[0].last_buffer_id == 5);
// last_buffer_id should be equal to buffer_id once the queue is empty
REQUIRE(state.read().source_statuses->status[0].last_buffer_id == buffer_id);
// Restart Playing
for (size_t frame_count = 0; frame_count < 10; frame_count++) {
state.waitForSync();
if (!state.read().source_statuses->status[0].is_enabled) {
state.write().source_configurations->config[0].enable = true;
state.write().source_configurations->config[0].enable_dirty.Assign(true);
}
if (state.read().source_statuses->status[0].current_buffer_id_dirty) {
if (state.read().source_statuses->status[0].current_buffer_id == buffer_id ||
state.read().source_statuses->status[0].current_buffer_id == 0) {
state.write()
.source_configurations->config[0]
.buffers[next_queue_position]
.physical_address =
osConvertVirtToPhys(buffer_id % 2 ? audio_buffer2 : audio_buffer);
state.write()
.source_configurations->config[0]
.buffers[next_queue_position]
.length = NUM_SAMPLES;
state.write()
.source_configurations->config[0]
.buffers[next_queue_position]
.adpcm_dirty = false;
state.write()
.source_configurations->config[0]
.buffers[next_queue_position]
.is_looping = false;
state.write()
.source_configurations->config[0]
.buffers[next_queue_position]
.buffer_id = ++buffer_id;
state.write().source_configurations->config[0].buffers_dirty |=
1 << next_queue_position;
next_queue_position = (next_queue_position + 1) % 4;
state.write().source_configurations->config[0].buffer_queue_dirty.Assign(true);
}
}
state.notifyDsp();
}
// last_buffer_id should be 0 if the queue is not empty
REQUIRE(state.read().source_statuses->status[0].last_buffer_id == 0);
// Let the queue finish playing
for (size_t frame_count = 0; frame_count < 10; frame_count++) {
state.waitForSync();
state.notifyDsp();
}
// last_buffer_id should be equal to buffer_id once the queue is empty
REQUIRE(state.read().source_statuses->status[0].last_buffer_id == buffer_id);
}
end:
audioExit(state);
}

View File

@@ -160,8 +160,8 @@ if (ENABLE_VULKAN)
renderer_vulkan/vk_blit_helper.h
renderer_vulkan/vk_common.cpp
renderer_vulkan/vk_common.h
renderer_vulkan/vk_descriptor_pool.cpp
renderer_vulkan/vk_descriptor_pool.h
renderer_vulkan/vk_descriptor_update_queue.cpp
renderer_vulkan/vk_descriptor_update_queue.h
renderer_vulkan/vk_graphics_pipeline.cpp
renderer_vulkan/vk_graphics_pipeline.h
renderer_vulkan/vk_master_semaphore.cpp
@@ -183,8 +183,8 @@ if (ENABLE_VULKAN)
renderer_vulkan/vk_platform.h
renderer_vulkan/vk_present_window.cpp
renderer_vulkan/vk_present_window.h
renderer_vulkan/vk_renderpass_cache.cpp
renderer_vulkan/vk_renderpass_cache.h
renderer_vulkan/vk_render_manager.cpp
renderer_vulkan/vk_render_manager.h
renderer_vulkan/vk_shader_util.cpp
renderer_vulkan/vk_shader_util.h
renderer_vulkan/vk_stream_buffer.cpp

View File

@@ -385,7 +385,7 @@ std::vector<FileUtil::FSTEntry> CustomTexManager::GetTextures(u64 title_id) {
}
void CustomTexManager::CreateWorkers() {
const std::size_t num_workers = std::max(std::thread::hardware_concurrency(), 2U) - 1;
const std::size_t num_workers = std::max(std::thread::hardware_concurrency(), 2U) >> 1;
workers = std::make_unique<Common::ThreadWorker>(num_workers, "Custom textures");
}
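For context, the old expression reserved all but one hardware thread for the custom-texture workers, while the new one uses roughly half of them; a quick check of the arithmetic (illustrative core counts only, not taken from the PR):
// Illustrative only: num_workers produced by the old and new expressions
// for a few hardware_concurrency() values.
//   hw = 2  ->  old: max(2, 2)  - 1 = 1     new: max(2, 2)  >> 1 = 1
//   hw = 4  ->  old: max(4, 2)  - 1 = 3     new: max(4, 2)  >> 1 = 2
//   hw = 8  ->  old: max(8, 2)  - 1 = 7     new: max(8, 2)  >> 1 = 4
//   hw = 16 ->  old: max(16, 2) - 1 = 15    new: max(16, 2) >> 1 = 8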

View File

@@ -176,15 +176,15 @@ struct TexturingRegs {
INSERT_PADDING_WORDS(0x9);
struct FullTextureConfig {
const bool enabled;
const u32 enabled;
const TextureConfig config;
const TextureFormat format;
};
const std::array<FullTextureConfig, 3> GetTextures() const {
return {{
{static_cast<bool>(main_config.texture0_enable), texture0, texture0_format},
{static_cast<bool>(main_config.texture1_enable), texture1, texture1_format},
{static_cast<bool>(main_config.texture2_enable), texture2, texture2_format},
{main_config.texture0_enable, texture0, texture0_format},
{main_config.texture1_enable, texture1, texture1_format},
{main_config.texture2_enable, texture2, texture2_format},
}};
}

View File

@@ -846,7 +846,7 @@ void RasterizerAccelerated::SyncTextureBorderColor(int tex_index) {
}
void RasterizerAccelerated::SyncClipPlane() {
const bool enable_clip1 = regs.rasterizer.clip_enable != 0;
const u32 enable_clip1 = regs.rasterizer.clip_enable != 0;
const auto raw_clip_coef = regs.rasterizer.GetClipCoef();
const Common::Vec4f new_clip_coef = {raw_clip_coef.x.ToFloat32(), raw_clip_coef.y.ToFloat32(),
raw_clip_coef.z.ToFloat32(), raw_clip_coef.w.ToFloat32()};

View File

@@ -600,14 +600,43 @@ typename T::Surface& RasterizerCache<T>::GetTextureCube(const TextureCubeConfig&
auto [it, new_surface] = texture_cube_cache.try_emplace(config);
TextureCube& cube = it->second;
const std::array addresses = {config.px, config.nx, config.py, config.ny, config.pz, config.nz};
if (new_surface) {
Pica::Texture::TextureInfo info = {
.width = config.width,
.height = config.width,
.format = config.format,
};
info.SetDefaultStride();
u32 res_scale = 1;
for (u32 i = 0; i < addresses.size(); i++) {
if (!addresses[i]) {
continue;
}
SurfaceId& face_id = cube.face_ids[i];
if (!face_id) {
info.physical_address = addresses[i];
face_id = GetTextureSurface(info, config.levels - 1);
Surface& surface = slot_surfaces[face_id];
ASSERT_MSG(
surface.levels >= config.levels,
"Texture cube face levels are not enough to validate the levels requested");
surface.flags |= SurfaceFlagBits::Tracked;
}
Surface& surface = slot_surfaces[face_id];
res_scale = std::max(surface.res_scale, res_scale);
}
SurfaceParams cube_params = {
.addr = config.px,
.width = config.width,
.height = config.width,
.stride = config.width,
.levels = config.levels,
.res_scale = filter != Settings::TextureFilter::None ? resolution_scale_factor : 1,
.res_scale = res_scale,
.texture_type = TextureType::CubeMap,
.pixel_format = PixelFormatFromTextureFormat(config.format),
.type = SurfaceType::Texture,
@@ -616,38 +645,20 @@ typename T::Surface& RasterizerCache<T>::GetTextureCube(const TextureCubeConfig&
cube.surface_id = CreateSurface(cube_params);
}
const u32 scaled_size = slot_surfaces[cube.surface_id].GetScaledWidth();
const std::array addresses = {config.px, config.nx, config.py, config.ny, config.pz, config.nz};
Pica::Texture::TextureInfo info = {
.width = config.width,
.height = config.width,
.format = config.format,
};
info.SetDefaultStride();
Surface& cube_surface = slot_surfaces[cube.surface_id];
for (u32 i = 0; i < addresses.size(); i++) {
if (!addresses[i]) {
continue;
}
SurfaceId& face_id = cube.face_ids[i];
if (!face_id) {
info.physical_address = addresses[i];
face_id = GetTextureSurface(info, config.levels - 1);
ASSERT_MSG(slot_surfaces[face_id].levels >= config.levels,
"Texture cube face levels are not enough to validate the levels requested");
}
Surface& surface = slot_surfaces[face_id];
surface.flags |= SurfaceFlagBits::Tracked;
Surface& surface = slot_surfaces[cube.face_ids[i]];
if (cube.ticks[i] == surface.modification_tick) {
continue;
}
cube.ticks[i] = surface.modification_tick;
Surface& cube_surface = slot_surfaces[cube.surface_id];
boost::container::small_vector<TextureCopy, 8> upload_copies;
for (u32 level = 0; level < config.levels; level++) {
const u32 width_lod = scaled_size >> level;
const TextureCopy texture_copy = {
const u32 width_lod = surface.GetScaledWidth() >> level;
upload_copies.push_back({
.src_level = level,
.dst_level = level,
.src_layer = 0,
@@ -655,9 +666,9 @@ typename T::Surface& RasterizerCache<T>::GetTextureCube(const TextureCubeConfig&
.src_offset = {0, 0},
.dst_offset = {0, 0},
.extent = {width_lod, width_lod},
};
runtime.CopyTextures(surface, cube_surface, texture_copy);
});
}
runtime.CopyTextures(surface, cube_surface, upload_copies);
}
return slot_surfaces[cube.surface_id];

View File

@@ -260,16 +260,19 @@ void TextureRuntime::ClearTexture(Surface& surface, const VideoCore::TextureClea
}
bool TextureRuntime::CopyTextures(Surface& source, Surface& dest,
const VideoCore::TextureCopy& copy) {
std::span<const VideoCore::TextureCopy> copies) {
const GLenum src_textarget = source.texture_type == VideoCore::TextureType::CubeMap
? GL_TEXTURE_CUBE_MAP
: GL_TEXTURE_2D;
const GLenum dest_textarget =
dest.texture_type == VideoCore::TextureType::CubeMap ? GL_TEXTURE_CUBE_MAP : GL_TEXTURE_2D;
glCopyImageSubData(source.Handle(), src_textarget, copy.src_level, copy.src_offset.x,
copy.src_offset.y, copy.src_layer, dest.Handle(), dest_textarget,
copy.dst_level, copy.dst_offset.x, copy.dst_offset.y, copy.dst_layer,
copy.extent.width, copy.extent.height, 1);
for (const auto& copy : copies) {
glCopyImageSubData(source.Handle(), src_textarget, copy.src_level, copy.src_offset.x,
copy.src_offset.y, copy.src_layer, dest.Handle(), dest_textarget,
copy.dst_level, copy.dst_offset.x, copy.dst_offset.y, copy.dst_layer,
copy.extent.width, copy.extent.height, 1);
}
return true;
}

View File

@@ -65,7 +65,12 @@ public:
void ClearTexture(Surface& surface, const VideoCore::TextureClear& clear);
/// Copies a rectangle of source to another rectangle of dest
bool CopyTextures(Surface& source, Surface& dest, const VideoCore::TextureCopy& copy);
bool CopyTextures(Surface& source, Surface& dest,
std::span<const VideoCore::TextureCopy> copies);
bool CopyTextures(Surface& source, Surface& dest, const VideoCore::TextureCopy& copy) {
return CopyTextures(source, dest, std::array{copy});
}
/// Blits a rectangle of source to another rectangle of dest
bool BlitTextures(Surface& source, Surface& dest, const VideoCore::TextureBlit& blit);
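As a rough usage sketch of the new span overload (the copy fields and small_vector capacity mirror the GetTextureCube hunk above; the surface names, level count, and face index are placeholders, not from the PR):
// Sketch only: batch one copy per mip level and submit them in a single call.
boost::container::small_vector<VideoCore::TextureCopy, 8> copies;
for (u32 level = 0; level < levels; level++) {
    const u32 size = face.GetScaledWidth() >> level;
    copies.push_back({
        .src_level = level,
        .dst_level = level,
        .src_layer = 0,
        .dst_layer = face_index,
        .src_offset = {0, 0},
        .dst_offset = {0, 0},
        .extent = {size, size},
    });
}
// The OpenGL runtime loops over the span internally; the single-copy overload
// above simply wraps one copy in a std::array and forwards it.
runtime.CopyTextures(face, cube, copies);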

View File

@@ -148,8 +148,6 @@ inline GLenum BlendFunc(Pica::FramebufferRegs::BlendFactor factor) {
// Range check table for input
if (index >= blend_func_table.size()) {
LOG_CRITICAL(Render_OpenGL, "Unknown blend factor {}", index);
UNREACHABLE();
return GL_ONE;
}

View File

@@ -695,7 +695,7 @@ Common::Vec4<u8> RasterizerSoftware::WriteTevConfig(
* with some basic arithmetic. Alpha combiners can be configured separately but work
* analogously.
**/
Common::Vec4<u8> combiner_output = primary_color;
Common::Vec4<u8> combiner_output = {0, 0, 0, 0};
Common::Vec4<u8> combiner_buffer = {0, 0, 0, 0};
Common::Vec4<u8> next_combiner_buffer =
Common::MakeVec(regs.texturing.tev_combiner_buffer_color.r.Value(),
@@ -746,9 +746,15 @@ Common::Vec4<u8> RasterizerSoftware::WriteTevConfig(
* combiner_output.rgb(), but instead store it in a temporary variable until
* alpha combining has been done.
**/
const auto source1 = tev_stage_index == 0 && tev_stage.color_source1 == Source::Previous
? tev_stage.color_source3.Value()
: tev_stage.color_source1.Value();
const auto source2 = tev_stage_index == 0 && tev_stage.color_source2 == Source::Previous
? tev_stage.color_source3.Value()
: tev_stage.color_source2.Value();
const std::array<Common::Vec3<u8>, 3> color_result = {
GetColorModifier(tev_stage.color_modifier1, get_source(tev_stage.color_source1)),
GetColorModifier(tev_stage.color_modifier2, get_source(tev_stage.color_source2)),
GetColorModifier(tev_stage.color_modifier1, get_source(source1)),
GetColorModifier(tev_stage.color_modifier2, get_source(source2)),
GetColorModifier(tev_stage.color_modifier3, get_source(tev_stage.color_source3)),
};
const Common::Vec3<u8> color_output = ColorCombine(tev_stage.color_op, color_result);

View File

@@ -96,7 +96,10 @@ inline vk::BlendFactor BlendFunc(Pica::FramebufferRegs::BlendFactor factor) {
}};
const auto index = static_cast<std::size_t>(factor);
ASSERT_MSG(index < blend_func_table.size(), "Unknown blend factor {}", index);
if (index >= blend_func_table.size()) {
LOG_CRITICAL(Render_Vulkan, "Unknown blend factor {}", index);
return vk::BlendFactor::eOne;
}
return blend_func_table[index];
}

View File

@@ -54,21 +54,21 @@ RendererVulkan::RendererVulkan(Core::System& system, Pica::PicaCore& pica_,
Frontend::EmuWindow& window, Frontend::EmuWindow* secondary_window)
: RendererBase{system, window, secondary_window}, memory{system.Memory()}, pica{pica_},
instance{system.TelemetrySession(), window, Settings::values.physical_device.GetValue()},
scheduler{instance}, renderpass_cache{instance, scheduler}, pool{instance},
main_window{window, instance, scheduler},
scheduler{instance}, render_manager{instance, scheduler}, main_window{window, instance,
scheduler},
vertex_buffer{instance, scheduler, vk::BufferUsageFlagBits::eVertexBuffer,
VERTEX_BUFFER_SIZE},
rasterizer{memory,
pica,
system.CustomTexManager(),
*this,
render_window,
instance,
scheduler,
pool,
renderpass_cache,
main_window.ImageCount()},
present_set_provider{instance, pool, PRESENT_BINDINGS} {
update_queue{instance}, rasterizer{memory,
pica,
system.CustomTexManager(),
*this,
render_window,
instance,
scheduler,
render_manager,
update_queue,
main_window.ImageCount()},
present_heap{instance, scheduler.GetMasterSemaphore(), PRESENT_BINDINGS, 32} {
CompileShaders();
BuildLayouts();
BuildPipelines();
@@ -127,16 +127,14 @@ void RendererVulkan::PrepareRendertarget() {
void RendererVulkan::PrepareDraw(Frame* frame, const Layout::FramebufferLayout& layout) {
const auto sampler = present_samplers[!Settings::values.filter_mode.GetValue()];
std::transform(screen_infos.begin(), screen_infos.end(), present_textures.begin(),
[&](auto& info) {
return DescriptorData{vk::DescriptorImageInfo{sampler, info.image_view,
vk::ImageLayout::eGeneral}};
});
const auto present_set = present_heap.Commit();
for (u32 index = 0; index < screen_infos.size(); index++) {
update_queue.AddImageSampler(present_set, 0, index, screen_infos[index].image_view,
sampler);
}
const auto descriptor_set = present_set_provider.Acquire(present_textures);
renderpass_cache.EndRendering();
scheduler.Record([this, layout, frame, descriptor_set, renderpass = main_window.Renderpass(),
render_manager.EndRendering();
scheduler.Record([this, layout, frame, present_set, renderpass = main_window.Renderpass(),
index = current_pipeline](vk::CommandBuffer cmdbuf) {
const vk::Viewport viewport = {
.x = 0.0f,
@@ -171,7 +169,7 @@ void RendererVulkan::PrepareDraw(Frame* frame, const Layout::FramebufferLayout&
cmdbuf.beginRenderPass(renderpass_begin_info, vk::SubpassContents::eInline);
cmdbuf.bindPipeline(vk::PipelineBindPoint::eGraphics, present_pipelines[index]);
cmdbuf.bindDescriptorSets(vk::PipelineBindPoint::eGraphics, layout, 0, descriptor_set, {});
cmdbuf.bindDescriptorSets(vk::PipelineBindPoint::eGraphics, layout, 0, present_set, {});
});
}
@@ -258,7 +256,7 @@ void RendererVulkan::BuildLayouts() {
.size = sizeof(PresentUniformData),
};
const auto descriptor_set_layout = present_set_provider.Layout();
const auto descriptor_set_layout = present_heap.Layout();
const vk::PipelineLayoutCreateInfo layout_info = {
.setLayoutCount = 1,
.pSetLayouts = &descriptor_set_layout,
@@ -466,7 +464,7 @@ void RendererVulkan::FillScreen(Common::Vec3<u8> color, const TextureInfo& textu
},
};
renderpass_cache.EndRendering();
render_manager.EndRendering();
scheduler.Record([image = texture.image, clear_color](vk::CommandBuffer cmdbuf) {
const vk::ImageSubresourceRange range = {
.aspectMask = vk::ImageAspectFlagBits::eColor,

View File

@@ -7,11 +7,10 @@
#include "common/common_types.h"
#include "common/math_util.h"
#include "video_core/renderer_base.h"
#include "video_core/renderer_vulkan/vk_descriptor_pool.h"
#include "video_core/renderer_vulkan/vk_instance.h"
#include "video_core/renderer_vulkan/vk_present_window.h"
#include "video_core/renderer_vulkan/vk_rasterizer.h"
#include "video_core/renderer_vulkan/vk_renderpass_cache.h"
#include "video_core/renderer_vulkan/vk_render_manager.h"
#include "video_core/renderer_vulkan/vk_scheduler.h"
namespace Core {
@@ -118,15 +117,15 @@ private:
Instance instance;
Scheduler scheduler;
RenderpassCache renderpass_cache;
DescriptorPool pool;
RenderManager render_manager;
PresentWindow main_window;
StreamBuffer vertex_buffer;
DescriptorUpdateQueue update_queue;
RasterizerVulkan rasterizer;
std::unique_ptr<PresentWindow> second_window;
DescriptorHeap present_heap;
vk::UniquePipelineLayout present_pipeline_layout;
DescriptorSetProvider present_set_provider;
std::array<vk::Pipeline, PRESENT_PIPELINES> present_pipelines;
std::array<vk::ShaderModule, PRESENT_PIPELINES> present_shaders;
std::array<vk::Sampler, 2> present_samplers;
@@ -134,7 +133,6 @@ private:
u32 current_pipeline = 0;
std::array<ScreenInfo, 3> screen_infos{};
std::array<DescriptorData, 3> present_textures{};
PresentUniformData draw_info{};
vk::ClearColorValue clear_color{};
};

View File

@@ -4,8 +4,9 @@
#include "common/vector_math.h"
#include "video_core/renderer_vulkan/vk_blit_helper.h"
#include "video_core/renderer_vulkan/vk_descriptor_update_queue.h"
#include "video_core/renderer_vulkan/vk_instance.h"
#include "video_core/renderer_vulkan/vk_renderpass_cache.h"
#include "video_core/renderer_vulkan/vk_render_manager.h"
#include "video_core/renderer_vulkan/vk_scheduler.h"
#include "video_core/renderer_vulkan/vk_shader_util.h"
#include "video_core/renderer_vulkan/vk_texture_runtime.h"
@@ -177,12 +178,13 @@ constexpr vk::PipelineShaderStageCreateInfo MakeStages(vk::ShaderModule compute_
} // Anonymous namespace
BlitHelper::BlitHelper(const Instance& instance_, Scheduler& scheduler_, DescriptorPool& pool,
RenderpassCache& renderpass_cache_)
: instance{instance_}, scheduler{scheduler_}, renderpass_cache{renderpass_cache_},
device{instance.GetDevice()}, compute_provider{instance, pool, COMPUTE_BINDINGS},
compute_buffer_provider{instance, pool, COMPUTE_BUFFER_BINDINGS},
two_textures_provider{instance, pool, TWO_TEXTURES_BINDINGS},
BlitHelper::BlitHelper(const Instance& instance_, Scheduler& scheduler_,
RenderManager& render_manager_, DescriptorUpdateQueue& update_queue_)
: instance{instance_}, scheduler{scheduler_}, render_manager{render_manager_},
update_queue{update_queue_}, device{instance.GetDevice()},
compute_provider{instance, scheduler.GetMasterSemaphore(), COMPUTE_BINDINGS},
compute_buffer_provider{instance, scheduler.GetMasterSemaphore(), COMPUTE_BUFFER_BINDINGS},
two_textures_provider{instance, scheduler.GetMasterSemaphore(), TWO_TEXTURES_BINDINGS, 16},
compute_pipeline_layout{
device.createPipelineLayout(PipelineLayoutCreateInfo(&compute_provider.Layout(), true))},
compute_buffer_pipeline_layout{device.createPipelineLayout(
@@ -282,27 +284,16 @@ bool BlitHelper::BlitDepthStencil(Surface& source, Surface& dest,
.extent = {dest.GetScaledWidth(), dest.GetScaledHeight()},
};
std::array<DescriptorData, 2> textures{};
textures[0].image_info = vk::DescriptorImageInfo{
.sampler = nearest_sampler,
.imageView = source.DepthView(),
.imageLayout = vk::ImageLayout::eGeneral,
};
textures[1].image_info = vk::DescriptorImageInfo{
.sampler = nearest_sampler,
.imageView = source.StencilView(),
.imageLayout = vk::ImageLayout::eGeneral,
};
const auto descriptor_set = two_textures_provider.Acquire(textures);
const auto descriptor_set = two_textures_provider.Commit();
update_queue.AddImageSampler(descriptor_set, 0, 0, source.DepthView(), nearest_sampler);
update_queue.AddImageSampler(descriptor_set, 1, 0, source.StencilView(), nearest_sampler);
const RenderPass depth_pass = {
.framebuffer = dest.Framebuffer(),
.render_pass =
renderpass_cache.GetRenderpass(PixelFormat::Invalid, dest.pixel_format, false),
.render_pass = render_manager.GetRenderpass(PixelFormat::Invalid, dest.pixel_format, false),
.render_area = dst_render_area,
};
renderpass_cache.BeginRendering(depth_pass);
render_manager.BeginRendering(depth_pass);
scheduler.Record([blit, descriptor_set, this](vk::CommandBuffer cmdbuf) {
const vk::PipelineLayout layout = two_textures_pipeline_layout;
@@ -318,23 +309,14 @@ bool BlitHelper::BlitDepthStencil(Surface& source, Surface& dest,
bool BlitHelper::ConvertDS24S8ToRGBA8(Surface& source, Surface& dest,
const VideoCore::TextureCopy& copy) {
std::array<DescriptorData, 3> textures{};
textures[0].image_info = vk::DescriptorImageInfo{
.imageView = source.DepthView(),
.imageLayout = vk::ImageLayout::eDepthStencilReadOnlyOptimal,
};
textures[1].image_info = vk::DescriptorImageInfo{
.imageView = source.StencilView(),
.imageLayout = vk::ImageLayout::eDepthStencilReadOnlyOptimal,
};
textures[2].image_info = vk::DescriptorImageInfo{
.imageView = dest.ImageView(),
.imageLayout = vk::ImageLayout::eGeneral,
};
const auto descriptor_set = compute_provider.Commit();
update_queue.AddImageSampler(descriptor_set, 0, 0, source.DepthView(), VK_NULL_HANDLE,
vk::ImageLayout::eDepthStencilReadOnlyOptimal);
update_queue.AddImageSampler(descriptor_set, 1, 0, source.StencilView(), VK_NULL_HANDLE,
vk::ImageLayout::eDepthStencilReadOnlyOptimal);
update_queue.AddStorageImage(descriptor_set, 2, dest.ImageView());
const auto descriptor_set = compute_provider.Acquire(textures);
renderpass_cache.EndRendering();
render_manager.EndRendering();
scheduler.Record([this, descriptor_set, copy, src_image = source.Image(),
dst_image = dest.Image()](vk::CommandBuffer cmdbuf) {
const std::array pre_barriers = {
@@ -438,26 +420,15 @@ bool BlitHelper::ConvertDS24S8ToRGBA8(Surface& source, Surface& dest,
bool BlitHelper::DepthToBuffer(Surface& source, vk::Buffer buffer,
const VideoCore::BufferTextureCopy& copy) {
std::array<DescriptorData, 3> textures{};
textures[0].image_info = vk::DescriptorImageInfo{
.sampler = nearest_sampler,
.imageView = source.DepthView(),
.imageLayout = vk::ImageLayout::eDepthStencilReadOnlyOptimal,
};
textures[1].image_info = vk::DescriptorImageInfo{
.sampler = nearest_sampler,
.imageView = source.StencilView(),
.imageLayout = vk::ImageLayout::eDepthStencilReadOnlyOptimal,
};
textures[2].buffer_info = vk::DescriptorBufferInfo{
.buffer = buffer,
.offset = copy.buffer_offset,
.range = copy.buffer_size,
};
const auto descriptor_set = compute_buffer_provider.Commit();
update_queue.AddImageSampler(descriptor_set, 0, 0, source.DepthView(), nearest_sampler,
vk::ImageLayout::eDepthStencilReadOnlyOptimal);
update_queue.AddImageSampler(descriptor_set, 1, 0, source.StencilView(), nearest_sampler,
vk::ImageLayout::eDepthStencilReadOnlyOptimal);
update_queue.AddBuffer(descriptor_set, 2, buffer, copy.buffer_offset, copy.buffer_size,
vk::DescriptorType::eStorageBuffer);
const auto descriptor_set = compute_buffer_provider.Acquire(textures);
renderpass_cache.EndRendering();
render_manager.EndRendering();
scheduler.Record([this, descriptor_set, copy, src_image = source.Image(),
extent = source.RealExtent(false)](vk::CommandBuffer cmdbuf) {
const vk::ImageMemoryBarrier pre_barrier = {
@@ -543,8 +514,8 @@ vk::Pipeline BlitHelper::MakeDepthStencilBlitPipeline() {
}
const std::array stages = MakeStages(full_screen_vert, blit_depth_stencil_frag);
const auto renderpass = renderpass_cache.GetRenderpass(VideoCore::PixelFormat::Invalid,
VideoCore::PixelFormat::D24S8, false);
const auto renderpass = render_manager.GetRenderpass(VideoCore::PixelFormat::Invalid,
VideoCore::PixelFormat::D24S8, false);
vk::GraphicsPipelineCreateInfo depth_stencil_info = {
.stageCount = static_cast<u32>(stages.size()),
.pStages = stages.data(),

View File

@@ -4,7 +4,7 @@
#pragma once
#include "video_core/renderer_vulkan/vk_descriptor_pool.h"
#include "video_core/renderer_vulkan/vk_resource_pool.h"
namespace VideoCore {
struct TextureBlit;
@@ -15,16 +15,17 @@ struct BufferTextureCopy;
namespace Vulkan {
class Instance;
class RenderpassCache;
class RenderManager;
class Scheduler;
class Surface;
class DescriptorUpdateQueue;
class BlitHelper {
friend class TextureRuntime;
public:
BlitHelper(const Instance& instance, Scheduler& scheduler, DescriptorPool& pool,
RenderpassCache& renderpass_cache);
explicit BlitHelper(const Instance& instance, Scheduler& scheduler,
RenderManager& render_manager, DescriptorUpdateQueue& update_queue);
~BlitHelper();
bool BlitDepthStencil(Surface& source, Surface& dest, const VideoCore::TextureBlit& blit);
@@ -41,14 +42,15 @@ private:
private:
const Instance& instance;
Scheduler& scheduler;
RenderpassCache& renderpass_cache;
RenderManager& render_manager;
DescriptorUpdateQueue& update_queue;
vk::Device device;
vk::RenderPass r32_renderpass;
DescriptorSetProvider compute_provider;
DescriptorSetProvider compute_buffer_provider;
DescriptorSetProvider two_textures_provider;
DescriptorHeap compute_provider;
DescriptorHeap compute_buffer_provider;
DescriptorHeap two_textures_provider;
vk::PipelineLayout compute_pipeline_layout;
vk::PipelineLayout compute_buffer_pipeline_layout;
vk::PipelineLayout two_textures_pipeline_layout;

View File

@@ -9,7 +9,6 @@
#define VK_NO_PROTOTYPES
#define VULKAN_HPP_DISPATCH_LOADER_DYNAMIC 1
#define VULKAN_HPP_NO_CONSTRUCTORS
#define VULKAN_HPP_NO_UNION_CONSTRUCTORS
#define VULKAN_HPP_NO_STRUCT_SETTERS
#include <vulkan/vulkan.hpp>

View File

@@ -1,141 +0,0 @@
// Copyright 2023 Citra Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include "common/microprofile.h"
#include "video_core/renderer_vulkan/vk_descriptor_pool.h"
#include "video_core/renderer_vulkan/vk_instance.h"
namespace Vulkan {
MICROPROFILE_DEFINE(Vulkan_DescriptorSetAcquire, "Vulkan", "Descriptor Set Acquire",
MP_RGB(64, 128, 256));
constexpr u32 MAX_BATCH_SIZE = 8;
DescriptorPool::DescriptorPool(const Instance& instance_) : instance{instance_} {
auto& pool = pools.emplace_back();
pool = CreatePool();
}
DescriptorPool::~DescriptorPool() = default;
std::vector<vk::DescriptorSet> DescriptorPool::Allocate(vk::DescriptorSetLayout layout,
u32 num_sets) {
std::array<vk::DescriptorSetLayout, MAX_BATCH_SIZE> layouts;
layouts.fill(layout);
u32 current_pool = 0;
vk::DescriptorSetAllocateInfo alloc_info = {
.descriptorPool = *pools[current_pool],
.descriptorSetCount = num_sets,
.pSetLayouts = layouts.data(),
};
while (true) {
try {
return instance.GetDevice().allocateDescriptorSets(alloc_info);
} catch (const vk::OutOfPoolMemoryError&) {
current_pool++;
if (current_pool == pools.size()) {
LOG_INFO(Render_Vulkan, "Run out of pools, creating new one!");
auto& pool = pools.emplace_back();
pool = CreatePool();
}
alloc_info.descriptorPool = *pools[current_pool];
}
}
}
vk::DescriptorSet DescriptorPool::Allocate(vk::DescriptorSetLayout layout) {
const auto sets = Allocate(layout, 1);
return sets[0];
}
vk::UniqueDescriptorPool DescriptorPool::CreatePool() {
// Choose a sane pool size good for most games
static constexpr std::array<vk::DescriptorPoolSize, 6> pool_sizes = {{
{vk::DescriptorType::eUniformBufferDynamic, 64},
{vk::DescriptorType::eUniformTexelBuffer, 64},
{vk::DescriptorType::eCombinedImageSampler, 4096},
{vk::DescriptorType::eSampledImage, 256},
{vk::DescriptorType::eStorageImage, 256},
{vk::DescriptorType::eStorageBuffer, 32},
}};
const vk::DescriptorPoolCreateInfo descriptor_pool_info = {
.maxSets = 4098,
.poolSizeCount = static_cast<u32>(pool_sizes.size()),
.pPoolSizes = pool_sizes.data(),
};
return instance.GetDevice().createDescriptorPoolUnique(descriptor_pool_info);
}
DescriptorSetProvider::DescriptorSetProvider(
const Instance& instance, DescriptorPool& pool_,
std::span<const vk::DescriptorSetLayoutBinding> bindings)
: pool{pool_}, device{instance.GetDevice()} {
std::array<vk::DescriptorUpdateTemplateEntry, MAX_DESCRIPTORS> update_entries;
for (u32 i = 0; i < bindings.size(); i++) {
update_entries[i] = vk::DescriptorUpdateTemplateEntry{
.dstBinding = bindings[i].binding,
.dstArrayElement = 0,
.descriptorCount = bindings[i].descriptorCount,
.descriptorType = bindings[i].descriptorType,
.offset = i * sizeof(DescriptorData),
.stride = sizeof(DescriptorData),
};
}
const vk::DescriptorSetLayoutCreateInfo layout_info = {
.bindingCount = static_cast<u32>(bindings.size()),
.pBindings = bindings.data(),
};
layout = device.createDescriptorSetLayoutUnique(layout_info);
const vk::DescriptorUpdateTemplateCreateInfo template_info = {
.descriptorUpdateEntryCount = static_cast<u32>(bindings.size()),
.pDescriptorUpdateEntries = update_entries.data(),
.templateType = vk::DescriptorUpdateTemplateType::eDescriptorSet,
.descriptorSetLayout = *layout,
};
update_template = device.createDescriptorUpdateTemplateUnique(template_info);
}
DescriptorSetProvider::~DescriptorSetProvider() = default;
vk::DescriptorSet DescriptorSetProvider::Acquire(std::span<const DescriptorData> data) {
MICROPROFILE_SCOPE(Vulkan_DescriptorSetAcquire);
DescriptorSetData key{};
std::memcpy(key.data(), data.data(), data.size_bytes());
const auto [it, new_set] = descriptor_set_map.try_emplace(key);
if (!new_set) {
return it->second;
}
if (free_sets.empty()) {
free_sets = pool.Allocate(*layout, MAX_BATCH_SIZE);
}
it.value() = free_sets.back();
free_sets.pop_back();
device.updateDescriptorSetWithTemplate(it->second, *update_template, data[0]);
return it->second;
}
void DescriptorSetProvider::FreeWithImage(vk::ImageView image_view) {
for (auto it = descriptor_set_map.begin(); it != descriptor_set_map.end();) {
const auto& [data, set] = *it;
const bool has_image = std::any_of(data.begin(), data.end(), [image_view](auto& info) {
return info.image_info.imageView == image_view;
});
if (has_image) {
free_sets.push_back(set);
it = descriptor_set_map.erase(it);
} else {
it++;
}
}
}
} // namespace Vulkan

View File

@@ -1,92 +0,0 @@
// Copyright 2023 Citra Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#pragma once
#include <span>
#include <vector>
#include <tsl/robin_map.h>
#include "common/hash.h"
#include "video_core/renderer_vulkan/vk_common.h"
namespace Vulkan {
class Instance;
constexpr u32 MAX_DESCRIPTORS = 7;
union DescriptorData {
vk::DescriptorImageInfo image_info;
vk::DescriptorBufferInfo buffer_info;
vk::BufferView buffer_view;
bool operator==(const DescriptorData& other) const noexcept {
return std::memcmp(this, &other, sizeof(DescriptorData)) == 0;
}
};
using DescriptorSetData = std::array<DescriptorData, MAX_DESCRIPTORS>;
struct DataHasher {
u64 operator()(const DescriptorSetData& data) const noexcept {
return Common::ComputeHash64(data.data(), sizeof(data));
}
};
/**
* An interface for allocating descriptor sets that manages a collection of descriptor pools.
*/
class DescriptorPool {
public:
explicit DescriptorPool(const Instance& instance);
~DescriptorPool();
std::vector<vk::DescriptorSet> Allocate(vk::DescriptorSetLayout layout, u32 num_sets);
vk::DescriptorSet Allocate(vk::DescriptorSetLayout layout);
private:
vk::UniqueDescriptorPool CreatePool();
private:
const Instance& instance;
std::vector<vk::UniqueDescriptorPool> pools;
};
/**
* Allocates and caches descriptor sets of a specific layout.
*/
class DescriptorSetProvider {
public:
explicit DescriptorSetProvider(const Instance& instance, DescriptorPool& pool,
std::span<const vk::DescriptorSetLayoutBinding> bindings);
~DescriptorSetProvider();
vk::DescriptorSet Acquire(std::span<const DescriptorData> data);
void FreeWithImage(vk::ImageView image_view);
[[nodiscard]] vk::DescriptorSetLayout Layout() const noexcept {
return *layout;
}
[[nodiscard]] vk::DescriptorSetLayout& Layout() noexcept {
return layout.get();
}
[[nodiscard]] vk::DescriptorUpdateTemplate UpdateTemplate() const noexcept {
return *update_template;
}
private:
DescriptorPool& pool;
vk::Device device;
vk::UniqueDescriptorSetLayout layout;
vk::UniqueDescriptorUpdateTemplate update_template;
std::vector<vk::DescriptorSet> free_sets;
tsl::robin_map<DescriptorSetData, vk::DescriptorSet, DataHasher> descriptor_set_map;
};
} // namespace Vulkan

View File

@@ -0,0 +1,109 @@
// Copyright 2024 Citra Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include "video_core/renderer_vulkan/vk_descriptor_update_queue.h"
#include "video_core/renderer_vulkan/vk_instance.h"
namespace Vulkan {
DescriptorUpdateQueue::DescriptorUpdateQueue(const Instance& instance, u32 descriptor_write_max_)
: device{instance.GetDevice()}, descriptor_write_max{descriptor_write_max_} {
descriptor_infos = std::make_unique<DescriptorInfoUnion[]>(descriptor_write_max);
descriptor_writes = std::make_unique<vk::WriteDescriptorSet[]>(descriptor_write_max);
}
void DescriptorUpdateQueue::Flush() {
if (descriptor_write_end == 0) {
return;
}
device.updateDescriptorSets({std::span(descriptor_writes.get(), descriptor_write_end)}, {});
descriptor_write_end = 0;
}
void DescriptorUpdateQueue::AddStorageImage(vk::DescriptorSet target, u8 binding,
vk::ImageView image_view,
vk::ImageLayout image_layout) {
if (descriptor_write_end >= descriptor_write_max) [[unlikely]] {
Flush();
}
auto& image_info = descriptor_infos[descriptor_write_end].image_info;
image_info.sampler = VK_NULL_HANDLE;
image_info.imageView = image_view;
image_info.imageLayout = image_layout;
descriptor_writes[descriptor_write_end++] = vk::WriteDescriptorSet{
.dstSet = target,
.dstBinding = binding,
.dstArrayElement = 0,
.descriptorCount = 1,
.descriptorType = vk::DescriptorType::eStorageImage,
.pImageInfo = &image_info,
};
}
void DescriptorUpdateQueue::AddImageSampler(vk::DescriptorSet target, u8 binding, u8 array_index,
vk::ImageView image_view, vk::Sampler sampler,
vk::ImageLayout image_layout) {
if (descriptor_write_end >= descriptor_write_max) [[unlikely]] {
Flush();
}
auto& image_info = descriptor_infos[descriptor_write_end].image_info;
image_info.sampler = sampler;
image_info.imageView = image_view;
image_info.imageLayout = image_layout;
descriptor_writes[descriptor_write_end++] = vk::WriteDescriptorSet{
.dstSet = target,
.dstBinding = binding,
.dstArrayElement = array_index,
.descriptorCount = 1,
.descriptorType =
sampler ? vk::DescriptorType::eCombinedImageSampler : vk::DescriptorType::eSampledImage,
.pImageInfo = &image_info,
};
}
void DescriptorUpdateQueue::AddBuffer(vk::DescriptorSet target, u8 binding, vk::Buffer buffer,
vk::DeviceSize offset, vk::DeviceSize size,
vk::DescriptorType type) {
if (descriptor_write_end >= descriptor_write_max) [[unlikely]] {
Flush();
}
auto& buffer_info = descriptor_infos[descriptor_write_end].buffer_info;
buffer_info.buffer = buffer;
buffer_info.offset = offset;
buffer_info.range = size;
descriptor_writes[descriptor_write_end++] = vk::WriteDescriptorSet{
.dstSet = target,
.dstBinding = binding,
.dstArrayElement = 0,
.descriptorCount = 1,
.descriptorType = type,
.pBufferInfo = &buffer_info,
};
}
void DescriptorUpdateQueue::AddTexelBuffer(vk::DescriptorSet target, u8 binding,
vk::BufferView buffer_view) {
if (descriptor_write_end >= descriptor_write_max) [[unlikely]] {
Flush();
}
auto& buffer_info = descriptor_infos[descriptor_write_end].buffer_view;
buffer_info = buffer_view;
descriptor_writes[descriptor_write_end++] = vk::WriteDescriptorSet{
.dstSet = target,
.dstBinding = binding,
.dstArrayElement = 0,
.descriptorCount = 1,
.descriptorType = vk::DescriptorType::eUniformTexelBuffer,
.pTexelBufferView = &buffer_info,
};
}
} // namespace Vulkan

View File

@@ -0,0 +1,53 @@
// Copyright 2024 Citra Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
#include <memory>
#include <variant>
#include "common/common_types.h"
#include "video_core/renderer_vulkan/vk_common.h"
namespace Vulkan {
class Instance;
struct DescriptorInfoUnion {
DescriptorInfoUnion() {}
union {
vk::DescriptorImageInfo image_info;
vk::DescriptorBufferInfo buffer_info;
vk::BufferView buffer_view;
};
};
class DescriptorUpdateQueue {
public:
explicit DescriptorUpdateQueue(const Instance& instance, u32 descriptor_write_max = 2048);
~DescriptorUpdateQueue() = default;
void Flush();
void AddStorageImage(vk::DescriptorSet target, u8 binding, vk::ImageView image_view,
vk::ImageLayout image_layout = vk::ImageLayout::eGeneral);
void AddImageSampler(vk::DescriptorSet target, u8 binding, u8 array_index,
vk::ImageView image_view, vk::Sampler sampler,
vk::ImageLayout imageLayout = vk::ImageLayout::eGeneral);
void AddBuffer(vk::DescriptorSet target, u8 binding, vk::Buffer buffer, vk::DeviceSize offset,
vk::DeviceSize size = VK_WHOLE_SIZE,
vk::DescriptorType type = vk::DescriptorType::eUniformBufferDynamic);
void AddTexelBuffer(vk::DescriptorSet target, u8 binding, vk::BufferView buffer_view);
private:
const vk::Device device;
const u32 descriptor_write_max;
std::unique_ptr<DescriptorInfoUnion[]> descriptor_infos;
std::unique_ptr<vk::WriteDescriptorSet[]> descriptor_writes;
u32 descriptor_write_end = 0;
};
} // namespace Vulkan
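For orientation, a minimal usage sketch of the new queue, assembled from the renderer and blit-helper hunks in this compare view (the heap, image view, sampler, buffer handle, and size values below are placeholders):
// Sketch only: commit a set from a DescriptorHeap, queue the writes, and let
// them reach vkUpdateDescriptorSets in one batched Flush().
const vk::DescriptorSet set = heap.Commit();
update_queue.AddImageSampler(set, /*binding=*/0, /*array_index=*/0, image_view, sampler);
update_queue.AddBuffer(set, /*binding=*/1, uniform_buffer, /*offset=*/0, /*size=*/256,
                       vk::DescriptorType::eUniformBufferDynamic);
// PipelineCache registers Flush() on scheduler dispatch, so queued writes land
// before the recorded command buffer is submitted.
update_queue.Flush();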

View File

@@ -9,7 +9,7 @@
#include "video_core/renderer_vulkan/pica_to_vk.h"
#include "video_core/renderer_vulkan/vk_graphics_pipeline.h"
#include "video_core/renderer_vulkan/vk_instance.h"
#include "video_core/renderer_vulkan/vk_renderpass_cache.h"
#include "video_core/renderer_vulkan/vk_render_manager.h"
#include "video_core/renderer_vulkan/vk_shader_util.h"
namespace Vulkan {
@@ -64,11 +64,11 @@ Shader::~Shader() {
}
}
GraphicsPipeline::GraphicsPipeline(const Instance& instance_, RenderpassCache& renderpass_cache_,
GraphicsPipeline::GraphicsPipeline(const Instance& instance_, RenderManager& render_manager_,
const PipelineInfo& info_, vk::PipelineCache pipeline_cache_,
vk::PipelineLayout layout_, std::array<Shader*, 3> stages_,
Common::ThreadWorker* worker_)
: instance{instance_}, renderpass_cache{renderpass_cache_}, worker{worker_},
: instance{instance_}, render_manager{render_manager_}, worker{worker_},
pipeline_layout{layout_}, pipeline_cache{pipeline_cache_}, info{info_}, stages{stages_} {}
GraphicsPipeline::~GraphicsPipeline() = default;
@@ -265,7 +265,7 @@ bool GraphicsPipeline::Build(bool fail_on_compile_required) {
.pDynamicState = &dynamic_info,
.layout = pipeline_layout,
.renderPass =
renderpass_cache.GetRenderpass(info.attachments.color, info.attachments.depth, false),
render_manager.GetRenderpass(info.attachments.color, info.attachments.depth, false),
};
if (fail_on_compile_required) {

View File

@@ -40,7 +40,7 @@ private:
namespace Vulkan {
class Instance;
class RenderpassCache;
class RenderManager;
constexpr u32 MAX_SHADER_STAGES = 3;
constexpr u32 MAX_VERTEX_ATTRIBUTES = 16;
@@ -126,7 +126,7 @@ struct AttachmentInfo {
};
/**
* Information about a graphics/compute pipeline
* Information about a graphics pipeline
*/
struct PipelineInfo {
BlendingState blending;
@@ -165,7 +165,7 @@ struct Shader : public Common::AsyncHandle {
class GraphicsPipeline : public Common::AsyncHandle {
public:
explicit GraphicsPipeline(const Instance& instance, RenderpassCache& renderpass_cache,
explicit GraphicsPipeline(const Instance& instance, RenderManager& render_manager,
const PipelineInfo& info, vk::PipelineCache pipeline_cache,
vk::PipelineLayout layout, std::array<Shader*, 3> stages,
Common::ThreadWorker* worker);
@@ -181,7 +181,7 @@ public:
private:
const Instance& instance;
RenderpassCache& renderpass_cache;
RenderManager& render_manager;
Common::ThreadWorker* worker;
vk::UniquePipeline pipeline;

View File

@@ -4,6 +4,7 @@
#include <span>
#include <boost/container/static_vector.hpp>
#include <fmt/format.h>
#include "common/assert.h"
#include "common/settings.h"
@@ -153,6 +154,12 @@ Instance::Instance(Core::TelemetrySession& telemetry, Frontend::EmuWindow& windo
physical_device = physical_devices[physical_device_index];
available_extensions = GetSupportedExtensions(physical_device);
properties = physical_device.getProperties();
if (properties.apiVersion < TargetVulkanApiVersion) {
throw std::runtime_error(fmt::format(
"Vulkan {}.{} is required, but only {}.{} is supported by device!",
VK_VERSION_MAJOR(TargetVulkanApiVersion), VK_VERSION_MINOR(TargetVulkanApiVersion),
VK_VERSION_MAJOR(properties.apiVersion), VK_VERSION_MINOR(properties.apiVersion)));
}
CollectTelemetryParameters(telemetry);
CreateDevice();
@@ -629,7 +636,7 @@ void Instance::CreateAllocator() {
.device = *device,
.pVulkanFunctions = &functions,
.instance = *instance,
.vulkanApiVersion = properties.apiVersion,
.vulkanApiVersion = TargetVulkanApiVersion,
};
const VkResult result = vmaCreateAllocator(&allocator_info, &allocator);
@@ -670,7 +677,7 @@ void Instance::CollectToolingInfo() {
if (!tooling_info) {
return;
}
const auto tools = physical_device.getToolProperties();
const auto tools = physical_device.getToolPropertiesEXT();
for (const vk::PhysicalDeviceToolProperties& tool : tools) {
const std::string_view name = tool.name;
LOG_INFO(Render_Vulkan, "Attached debugging tool: {}", name);

View File

@@ -5,7 +5,6 @@
#include <mutex>
#include "video_core/renderer_vulkan/vk_instance.h"
#include "video_core/renderer_vulkan/vk_master_semaphore.h"
#include "video_core/renderer_vulkan/vk_scheduler.h"
namespace Vulkan {
@@ -99,8 +98,7 @@ void MasterSemaphoreTimeline::SubmitWork(vk::CommandBuffer cmdbuf, vk::Semaphore
try {
instance.GetGraphicsQueue().submit(submit_info);
} catch (vk::DeviceLostError& err) {
LOG_CRITICAL(Render_Vulkan, "Device lost during submit: {}", err.what());
UNREACHABLE();
UNREACHABLE_MSG("Device lost during submit: {}", err.what());
}
}
@@ -109,23 +107,21 @@ constexpr u64 FENCE_RESERVE = 8;
MasterSemaphoreFence::MasterSemaphoreFence(const Instance& instance_) : instance{instance_} {
const vk::Device device{instance.GetDevice()};
for (u64 i = 0; i < FENCE_RESERVE; i++) {
free_queue.push(device.createFenceUnique({}));
free_queue.push_back(device.createFence({}));
}
wait_thread = std::jthread([this](std::stop_token token) { WaitThread(token); });
}
MasterSemaphoreFence::~MasterSemaphoreFence() = default;
MasterSemaphoreFence::~MasterSemaphoreFence() {
std::ranges::for_each(free_queue,
[this](auto fence) { instance.GetDevice().destroyFence(fence); });
}
void MasterSemaphoreFence::Refresh() {}
void MasterSemaphoreFence::Wait(u64 tick) {
while (true) {
u64 current_value = gpu_tick.load(std::memory_order_relaxed);
if (current_value >= tick) {
return;
}
gpu_tick.wait(current_value);
}
std::unique_lock lk{free_mutex};
free_cv.wait(lk, [&] { return gpu_tick.load(std::memory_order_relaxed) >= tick; });
}
void MasterSemaphoreFence::SubmitWork(vk::CommandBuffer cmdbuf, vk::Semaphore wait,
@@ -149,59 +145,56 @@ void MasterSemaphoreFence::SubmitWork(vk::CommandBuffer cmdbuf, vk::Semaphore wa
.pSignalSemaphores = &signal,
};
vk::UniqueFence fence{GetFreeFence()};
const vk::Fence fence = GetFreeFence();
try {
instance.GetGraphicsQueue().submit(submit_info, *fence);
instance.GetGraphicsQueue().submit(submit_info, fence);
} catch (vk::DeviceLostError& err) {
LOG_CRITICAL(Render_Vulkan, "Device lost during submit: {}", err.what());
UNREACHABLE();
UNREACHABLE_MSG("Device lost during submit: {}", err.what());
}
std::scoped_lock lock{wait_mutex};
wait_queue.push({
.handle = std::move(fence),
.signal_value = signal_value,
});
wait_queue.emplace(fence, signal_value);
wait_cv.notify_one();
}
void MasterSemaphoreFence::WaitThread(std::stop_token token) {
const vk::Device device{instance.GetDevice()};
while (!token.stop_requested()) {
Fence fence;
vk::Fence fence;
u64 signal_value;
{
std::unique_lock lock{wait_mutex};
Common::CondvarWait(wait_cv, lock, token, [this] { return !wait_queue.empty(); });
if (token.stop_requested()) {
return;
}
fence = std::move(wait_queue.front());
std::tie(fence, signal_value) = wait_queue.front();
wait_queue.pop();
}
const vk::Result result = device.waitForFences(*fence.handle, true, WAIT_TIMEOUT);
const vk::Result result = device.waitForFences(fence, true, WAIT_TIMEOUT);
if (result != vk::Result::eSuccess) {
LOG_CRITICAL(Render_Vulkan, "Fence wait failed with error {}", vk::to_string(result));
UNREACHABLE();
UNREACHABLE_MSG("Fence wait failed with error {}", vk::to_string(result));
}
device.resetFences(*fence.handle);
gpu_tick.store(fence.signal_value);
gpu_tick.notify_all();
device.resetFences(fence);
gpu_tick.store(signal_value);
std::scoped_lock lock{free_mutex};
free_queue.push(std::move(fence.handle));
free_queue.push_back(fence);
free_cv.notify_all();
}
}
vk::UniqueFence MasterSemaphoreFence::GetFreeFence() {
vk::Fence MasterSemaphoreFence::GetFreeFence() {
std::scoped_lock lock{free_mutex};
if (free_queue.empty()) {
return instance.GetDevice().createFenceUnique({});
return instance.GetDevice().createFence({});
}
vk::UniqueFence fence{std::move(free_queue.front())};
free_queue.pop();
const vk::Fence fence = free_queue.front();
free_queue.pop_front();
return fence;
}

View File

@@ -72,6 +72,8 @@ private:
};
class MasterSemaphoreFence : public MasterSemaphore {
using Waitable = std::pair<vk::Fence, u64>;
public:
explicit MasterSemaphoreFence(const Instance& instance);
~MasterSemaphoreFence() override;
@@ -86,20 +88,15 @@ public:
private:
void WaitThread(std::stop_token token);
vk::UniqueFence GetFreeFence();
vk::Fence GetFreeFence();
private:
const Instance& instance;
struct Fence {
vk::UniqueFence handle;
u64 signal_value;
};
std::queue<vk::UniqueFence> free_queue;
std::queue<Fence> wait_queue;
std::deque<vk::Fence> free_queue;
std::queue<Waitable> wait_queue;
std::mutex free_mutex;
std::mutex wait_mutex;
std::condition_variable free_cv;
std::condition_variable_any wait_cv;
std::jthread wait_thread;
};

View File

@@ -11,9 +11,10 @@
#include "common/scope_exit.h"
#include "common/settings.h"
#include "video_core/renderer_vulkan/pica_to_vk.h"
#include "video_core/renderer_vulkan/vk_descriptor_update_queue.h"
#include "video_core/renderer_vulkan/vk_instance.h"
#include "video_core/renderer_vulkan/vk_pipeline_cache.h"
#include "video_core/renderer_vulkan/vk_renderpass_cache.h"
#include "video_core/renderer_vulkan/vk_render_manager.h"
#include "video_core/renderer_vulkan/vk_scheduler.h"
#include "video_core/renderer_vulkan/vk_shader_util.h"
#include "video_core/shader/generator/glsl_fs_shader_gen.h"
@@ -62,34 +63,34 @@ constexpr std::array<vk::DescriptorSetLayoutBinding, 6> BUFFER_BINDINGS = {{
{5, vk::DescriptorType::eUniformTexelBuffer, 1, vk::ShaderStageFlagBits::eFragment},
}};
template <u32 NumTex0>
constexpr std::array<vk::DescriptorSetLayoutBinding, 3> TEXTURE_BINDINGS = {{
{0, vk::DescriptorType::eCombinedImageSampler, 1, vk::ShaderStageFlagBits::eFragment},
{1, vk::DescriptorType::eCombinedImageSampler, 1, vk::ShaderStageFlagBits::eFragment},
{2, vk::DescriptorType::eCombinedImageSampler, 1, vk::ShaderStageFlagBits::eFragment},
{0, vk::DescriptorType::eCombinedImageSampler, NumTex0,
vk::ShaderStageFlagBits::eFragment}, // tex0
{1, vk::DescriptorType::eCombinedImageSampler, 1, vk::ShaderStageFlagBits::eFragment}, // tex1
{2, vk::DescriptorType::eCombinedImageSampler, 1, vk::ShaderStageFlagBits::eFragment}, // tex2
}};
// TODO: Use descriptor array for shadow cube
constexpr std::array<vk::DescriptorSetLayoutBinding, 7> SHADOW_BINDINGS = {{
{0, vk::DescriptorType::eStorageImage, 1, vk::ShaderStageFlagBits::eFragment},
{1, vk::DescriptorType::eStorageImage, 1, vk::ShaderStageFlagBits::eFragment},
{2, vk::DescriptorType::eStorageImage, 1, vk::ShaderStageFlagBits::eFragment},
{3, vk::DescriptorType::eStorageImage, 1, vk::ShaderStageFlagBits::eFragment},
{4, vk::DescriptorType::eStorageImage, 1, vk::ShaderStageFlagBits::eFragment},
{5, vk::DescriptorType::eStorageImage, 1, vk::ShaderStageFlagBits::eFragment},
{6, vk::DescriptorType::eStorageImage, 1, vk::ShaderStageFlagBits::eFragment},
constexpr std::array<vk::DescriptorSetLayoutBinding, 2> UTILITY_BINDINGS = {{
{0, vk::DescriptorType::eStorageImage, 1, vk::ShaderStageFlagBits::eFragment}, // shadow_buffer
{1, vk::DescriptorType::eCombinedImageSampler, 1,
vk::ShaderStageFlagBits::eFragment}, // tex_normal
}};
PipelineCache::PipelineCache(const Instance& instance_, Scheduler& scheduler_,
RenderpassCache& renderpass_cache_, DescriptorPool& pool_)
: instance{instance_}, scheduler{scheduler_}, renderpass_cache{renderpass_cache_}, pool{pool_},
num_worker_threads{std::max(std::thread::hardware_concurrency(), 2U)},
RenderManager& render_manager_, DescriptorUpdateQueue& update_queue_)
: instance{instance_}, scheduler{scheduler_}, render_manager{render_manager_},
update_queue{update_queue_},
num_worker_threads{std::max(std::thread::hardware_concurrency(), 2U) >> 1},
workers{num_worker_threads, "Pipeline workers"},
descriptor_set_providers{DescriptorSetProvider{instance, pool, BUFFER_BINDINGS},
DescriptorSetProvider{instance, pool, TEXTURE_BINDINGS},
DescriptorSetProvider{instance, pool, SHADOW_BINDINGS}},
descriptor_heaps{
DescriptorHeap{instance, scheduler.GetMasterSemaphore(), BUFFER_BINDINGS, 32},
DescriptorHeap{instance, scheduler.GetMasterSemaphore(), TEXTURE_BINDINGS<1>},
DescriptorHeap{instance, scheduler.GetMasterSemaphore(), UTILITY_BINDINGS, 32}},
trivial_vertex_shader{
instance, vk::ShaderStageFlagBits::eVertex,
GLSL::GenerateTrivialVertexShader(instance.IsShaderClipDistanceSupported(), true)} {
scheduler.RegisterOnDispatch([this] { update_queue.Flush(); });
profile = Pica::Shader::Profile{
.has_separable_shaders = true,
.has_clip_planes = instance.IsShaderClipDistanceSupported(),
@@ -106,13 +107,13 @@ PipelineCache::PipelineCache(const Instance& instance_, Scheduler& scheduler_,
}
void PipelineCache::BuildLayout() {
std::array<vk::DescriptorSetLayout, NUM_RASTERIZER_SETS> descriptor_set_layouts;
std::transform(descriptor_set_providers.begin(), descriptor_set_providers.end(),
descriptor_set_layouts.begin(),
[](const auto& provider) { return provider.Layout(); });
std::array<vk::DescriptorSetLayout, NumRasterizerSets> descriptor_set_layouts;
descriptor_set_layouts[0] = descriptor_heaps[0].Layout();
descriptor_set_layouts[1] = descriptor_heaps[1].Layout();
descriptor_set_layouts[2] = descriptor_heaps[2].Layout();
const vk::PipelineLayoutCreateInfo layout_info = {
.setLayoutCount = NUM_RASTERIZER_SETS,
.setLayoutCount = NumRasterizerSets,
.pSetLayouts = descriptor_set_layouts.data(),
.pushConstantRangeCount = 0,
.pPushConstantRanges = nullptr,
@@ -205,7 +206,7 @@ bool PipelineCache::BindPipeline(const PipelineInfo& info, bool wait_built) {
auto [it, new_pipeline] = graphics_pipelines.try_emplace(pipeline_hash);
if (new_pipeline) {
it.value() =
std::make_unique<GraphicsPipeline>(instance, renderpass_cache, info, *pipeline_cache,
std::make_unique<GraphicsPipeline>(instance, render_manager, info, *pipeline_cache,
*pipeline_layout, current_shaders, &workers);
}
@@ -214,55 +215,11 @@ bool PipelineCache::BindPipeline(const PipelineInfo& info, bool wait_built) {
return false;
}
u32 new_descriptors_start = 0;
std::span<vk::DescriptorSet> new_descriptors_span{};
std::span<u32> new_offsets_span{};
// Ensure all the descriptor sets are set at least once at the beginning.
if (scheduler.IsStateDirty(StateFlags::DescriptorSets)) {
set_dirty.set();
}
if (set_dirty.any()) {
for (u32 i = 0; i < NUM_RASTERIZER_SETS; i++) {
if (!set_dirty.test(i)) {
continue;
}
bound_descriptor_sets[i] = descriptor_set_providers[i].Acquire(update_data[i]);
}
new_descriptors_span = bound_descriptor_sets;
// Only send new offsets if the buffer descriptor-set changed.
if (set_dirty.test(0)) {
new_offsets_span = offsets;
}
// Try to compact the number of updated descriptor-set slots to the ones that have actually
// changed
if (!set_dirty.all()) {
const u64 dirty_mask = set_dirty.to_ulong();
new_descriptors_start = static_cast<u32>(std::countr_zero(dirty_mask));
const u32 new_descriptors_end = 64u - static_cast<u32>(std::countl_zero(dirty_mask));
const u32 new_descriptors_size = new_descriptors_end - new_descriptors_start;
new_descriptors_span =
new_descriptors_span.subspan(new_descriptors_start, new_descriptors_size);
}
set_dirty.reset();
}
boost::container::static_vector<vk::DescriptorSet, NUM_RASTERIZER_SETS> new_descriptors(
new_descriptors_span.begin(), new_descriptors_span.end());
boost::container::static_vector<u32, NUM_DYNAMIC_OFFSETS> new_offsets(new_offsets_span.begin(),
new_offsets_span.end());
const bool is_dirty = scheduler.IsStateDirty(StateFlags::Pipeline);
const bool pipeline_dirty = (current_pipeline != pipeline) || is_dirty;
scheduler.Record([this, is_dirty, pipeline_dirty, pipeline,
current_dynamic = current_info.dynamic, dynamic = info.dynamic,
new_descriptors_start, descriptor_sets = std::move(new_descriptors),
offsets = std::move(new_offsets),
descriptor_sets = bound_descriptor_sets, offsets = offsets,
current_rasterization = current_info.rasterization,
current_depth_stencil = current_info.depth_stencil,
rasterization = info.rasterization,
@@ -364,10 +321,8 @@ bool PipelineCache::BindPipeline(const PipelineInfo& info, bool wait_built) {
cmdbuf.bindPipeline(vk::PipelineBindPoint::eGraphics, pipeline->Handle());
}
if (descriptor_sets.size()) {
cmdbuf.bindDescriptorSets(vk::PipelineBindPoint::eGraphics, *pipeline_layout,
new_descriptors_start, descriptor_sets, offsets);
}
cmdbuf.bindDescriptorSets(vk::PipelineBindPoint::eGraphics, *pipeline_layout, 0,
descriptor_sets, offsets);
});
current_info = info;
@@ -385,7 +340,6 @@ bool PipelineCache::UseProgrammableVertexShader(const Pica::RegsInternal& regs,
// We also don't need the geometry shader if we have the barycentric extension.
const bool use_geometry_shader = instance.UseGeometryShaders() && !regs.lighting.disable &&
!instance.IsFragmentShaderBarycentricSupported();
PicaVSConfig config{regs, setup, instance.IsShaderClipDistanceSupported(), use_geometry_shader};
for (u32 i = 0; i < layout.attribute_count; i++) {
@@ -402,7 +356,7 @@ bool PipelineCache::UseProgrammableVertexShader(const Pica::RegsInternal& regs,
}
}
auto [it, new_config] = programmable_vertex_map.try_emplace(config);
const auto [it, new_config] = programmable_vertex_map.try_emplace(config);
if (new_config) {
auto program = GLSL::GenerateVertexShader(setup, config, true);
if (program.empty()) {
@@ -497,59 +451,6 @@ void PipelineCache::UseFragmentShader(const Pica::RegsInternal& regs,
shader_hashes[ProgramType::FS] = fs_config.Hash();
}
void PipelineCache::BindTexture(u32 binding, vk::ImageView image_view, vk::Sampler sampler) {
auto& info = update_data[1][binding].image_info;
if (info.imageView == image_view && info.sampler == sampler) {
return;
}
set_dirty[1] = true;
info = vk::DescriptorImageInfo{
.sampler = sampler,
.imageView = image_view,
.imageLayout = vk::ImageLayout::eGeneral,
};
}
void PipelineCache::BindStorageImage(u32 binding, vk::ImageView image_view) {
auto& info = update_data[2][binding].image_info;
if (info.imageView == image_view) {
return;
}
set_dirty[2] = true;
info = vk::DescriptorImageInfo{
.imageView = image_view,
.imageLayout = vk::ImageLayout::eGeneral,
};
}
void PipelineCache::BindBuffer(u32 binding, vk::Buffer buffer, u32 offset, u32 size) {
auto& info = update_data[0][binding].buffer_info;
if (info.buffer == buffer && info.offset == offset && info.range == size) {
return;
}
set_dirty[0] = true;
info = vk::DescriptorBufferInfo{
.buffer = buffer,
.offset = offset,
.range = size,
};
}
void PipelineCache::BindTexelBuffer(u32 binding, vk::BufferView buffer_view) {
auto& view = update_data[0][binding].buffer_view;
if (view != buffer_view) {
set_dirty[0] = true;
view = buffer_view;
}
}
void PipelineCache::SetBufferOffset(u32 binding, std::size_t offset) {
if (offsets[binding] != static_cast<u32>(offset)) {
offsets[binding] = static_cast<u32>(offset);
set_dirty[0] = true;
}
}
bool PipelineCache::IsCacheValid(std::span<const u8> data) const {
if (data.size() < sizeof(vk::PipelineCacheHeaderVersionOne)) {
LOG_ERROR(Render_Vulkan, "Pipeline cache failed validation: Invalid header");

View File

@@ -7,8 +7,8 @@
#include <bitset>
#include <tsl/robin_map.h>
#include "video_core/renderer_vulkan/vk_descriptor_pool.h"
#include "video_core/renderer_vulkan/vk_graphics_pipeline.h"
#include "video_core/renderer_vulkan/vk_resource_pool.h"
#include "video_core/shader/generator/pica_fs_config.h"
#include "video_core/shader/generator/profile.h"
#include "video_core/shader/generator/shader_gen.h"
@@ -22,23 +22,39 @@ namespace Vulkan {
class Instance;
class Scheduler;
class RenderpassCache;
class DescriptorPool;
class RenderManager;
class DescriptorUpdateQueue;
constexpr u32 NUM_RASTERIZER_SETS = 3;
constexpr u32 NUM_DYNAMIC_OFFSETS = 3;
enum class DescriptorHeapType : u32 {
Buffer,
Texture,
Utility,
};
/**
* Stores a collection of rasterizer pipelines used during rendering.
*/
class PipelineCache {
static constexpr u32 NumRasterizerSets = 3;
static constexpr u32 NumDescriptorHeaps = 3;
static constexpr u32 NumDynamicOffsets = 3;
public:
explicit PipelineCache(const Instance& instance, Scheduler& scheduler,
RenderpassCache& renderpass_cache, DescriptorPool& pool);
RenderManager& render_manager, DescriptorUpdateQueue& update_queue);
~PipelineCache();
[[nodiscard]] DescriptorSetProvider& TextureProvider() noexcept {
return descriptor_set_providers[1];
/// Acquires and binds a free descriptor set from the appropriate heap.
vk::DescriptorSet Acquire(DescriptorHeapType type) {
const u32 index = static_cast<u32>(type);
const auto descriptor_set = descriptor_heaps[index].Commit();
bound_descriptor_sets[index] = descriptor_set;
return descriptor_set;
}
/// Sets the dynamic offset for the uniform buffer at binding
void UpdateRange(u8 binding, u32 offset) {
offsets[binding] = offset;
}
/// Loads the pipeline cache stored to disk
@@ -66,21 +82,6 @@ public:
/// Binds a fragment shader generated from PICA state
void UseFragmentShader(const Pica::RegsInternal& regs, const Pica::Shader::UserConfig& user);
/// Binds a texture to the specified binding
void BindTexture(u32 binding, vk::ImageView image_view, vk::Sampler sampler);
/// Binds a storage image to the specified binding
void BindStorageImage(u32 binding, vk::ImageView image_view);
/// Binds a buffer to the specified binding
void BindBuffer(u32 binding, vk::Buffer buffer, u32 offset, u32 size);
/// Binds a buffer to the specified binding
void BindTexelBuffer(u32 binding, vk::BufferView buffer_view);
/// Sets the dynamic offset for the uniform buffer at binding
void SetBufferOffset(u32 binding, std::size_t offset);
private:
/// Builds the rasterizer pipeline layout
void BuildLayout();
@@ -97,8 +98,8 @@ private:
private:
const Instance& instance;
Scheduler& scheduler;
RenderpassCache& renderpass_cache;
DescriptorPool& pool;
RenderManager& render_manager;
DescriptorUpdateQueue& update_queue;
Pica::Shader::Profile profile{};
vk::UniquePipelineCache pipeline_cache;
@@ -110,11 +111,9 @@ private:
tsl::robin_map<u64, std::unique_ptr<GraphicsPipeline>, Common::IdentityHash<u64>>
graphics_pipelines;
std::array<DescriptorSetProvider, NUM_RASTERIZER_SETS> descriptor_set_providers;
std::array<DescriptorSetData, NUM_RASTERIZER_SETS> update_data{};
std::array<vk::DescriptorSet, NUM_RASTERIZER_SETS> bound_descriptor_sets{};
std::array<u32, NUM_DYNAMIC_OFFSETS> offsets{};
std::bitset<NUM_RASTERIZER_SETS> set_dirty{};
std::array<DescriptorHeap, NumDescriptorHeaps> descriptor_heaps;
std::array<vk::DescriptorSet, NumRasterizerSets> bound_descriptor_sets{};
std::array<u32, NumDynamicOffsets> offsets{};
std::array<u64, MAX_SHADER_STAGES> shader_hashes;
std::array<Shader*, MAX_SHADER_STAGES> current_shaders;
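A hypothetical caller-side sketch of the descriptor flow this header now exposes (the rasterizer hunk is truncated in this view, so the binding indices, handles, and offsets below are assumptions for illustration only):
// Sketch only: acquire a set from the heap the binding belongs to, queue the
// writes, record any dynamic offset, then let BindPipeline bind everything.
const auto texture_set = pipeline_cache.Acquire(DescriptorHeapType::Texture);
update_queue.AddImageSampler(texture_set, /*binding=*/0, /*array_index=*/0, view, sampler);

const auto buffer_set = pipeline_cache.Acquire(DescriptorHeapType::Buffer);
update_queue.AddBuffer(buffer_set, /*binding=*/1, uniform_buffer, /*offset=*/0, /*size=*/512);
pipeline_cache.UpdateRange(/*binding=*/1, uniform_offset);

// The sets recorded via Acquire() and the offsets recorded via UpdateRange()
// are bound together in one bindDescriptorSets call inside BindPipeline.
pipeline_cache.BindPipeline(info, /*wait_built=*/false);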

View File

@@ -17,6 +17,7 @@
#include <memory>
#include <vector>
#include <boost/container/static_vector.hpp>
#include <fmt/format.h>
#include "common/assert.h"
#include "common/logging/log.h"
@@ -31,8 +32,9 @@ static VKAPI_ATTR VkBool32 VKAPI_CALL DebugUtilsCallback(
VkDebugUtilsMessageSeverityFlagBitsEXT severity, VkDebugUtilsMessageTypeFlagsEXT type,
const VkDebugUtilsMessengerCallbackDataEXT* callback_data, void* user_data) {
switch (callback_data->messageIdNumber) {
switch (static_cast<u32>(callback_data->messageIdNumber)) {
case 0x609a13b: // Vertex attribute at location not consumed by shader
case 0xc81ad50e:
return VK_FALSE;
default:
break;
@@ -290,13 +292,14 @@ vk::UniqueInstance CreateInstance(const Common::DynamicLibrary& library,
}
VULKAN_HPP_DEFAULT_DISPATCHER.init(vkGetInstanceProcAddr);
if (!VULKAN_HPP_DEFAULT_DISPATCHER.vkEnumerateInstanceVersion) {
throw std::runtime_error("Vulkan 1.0 is not supported, 1.1 is required!");
}
const u32 available_version = vk::enumerateInstanceVersion();
if (available_version < VK_API_VERSION_1_1) {
throw std::runtime_error("Vulkan 1.0 is not supported, 1.1 is required!");
const u32 available_version = VULKAN_HPP_DEFAULT_DISPATCHER.vkEnumerateInstanceVersion
? vk::enumerateInstanceVersion()
: VK_API_VERSION_1_0;
if (available_version < TargetVulkanApiVersion) {
throw std::runtime_error(fmt::format(
"Vulkan {}.{} is required, but only {}.{} is supported by instance!",
VK_VERSION_MAJOR(TargetVulkanApiVersion), VK_VERSION_MINOR(TargetVulkanApiVersion),
VK_VERSION_MAJOR(available_version), VK_VERSION_MINOR(available_version)));
}
const auto extensions = GetInstanceExtensions(window_type, enable_validation);
@@ -306,7 +309,7 @@ vk::UniqueInstance CreateInstance(const Common::DynamicLibrary& library,
.applicationVersion = VK_MAKE_VERSION(1, 0, 0),
.pEngineName = "Citra Vulkan",
.engineVersion = VK_MAKE_VERSION(1, 0, 0),
.apiVersion = VK_API_VERSION_1_3,
.apiVersion = TargetVulkanApiVersion,
};
boost::container::static_vector<const char*, 2> layers;
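As an aside, the packed version constants compared above decode with the standard Vulkan macros; a minimal sketch (not part of the diff) showing the values involved:

#include <vulkan/vulkan_core.h>
static_assert(VK_VERSION_MAJOR(VK_API_VERSION_1_1) == 1 && VK_VERSION_MINOR(VK_API_VERSION_1_1) == 1,
              "VK_API_VERSION_1_1 unpacks to 1.1");
static_assert(VK_API_VERSION_1_1 > VK_API_VERSION_1_0, "packed versions compare numerically");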

View File

@@ -19,6 +19,8 @@ enum class WindowSystemType : u8;
namespace Vulkan {
constexpr u32 TargetVulkanApiVersion = VK_API_VERSION_1_1;
using DebugCallback =
std::variant<vk::UniqueDebugUtilsMessengerEXT, vk::UniqueDebugReportCallbackEXT>;

View File

@@ -138,11 +138,11 @@ PresentWindow::PresentWindow(Frontend::EmuWindow& emu_window_, const Instance& i
if (instance.HasDebuggingToolAttached()) {
for (u32 i = 0; i < num_images; ++i) {
Vulkan::SetObjectName(device, swap_chain[i].cmdbuf, "Swapchain Command Buffer {}", i);
Vulkan::SetObjectName(device, swap_chain[i].render_ready,
"Swapchain Semaphore: render_ready {}", i);
Vulkan::SetObjectName(device, swap_chain[i].present_done,
"Swapchain Fence: present_done {}", i);
SetObjectName(device, swap_chain[i].cmdbuf, "Swapchain Command Buffer {}", i);
SetObjectName(device, swap_chain[i].render_ready,
"Swapchain Semaphore: render_ready {}", i);
SetObjectName(device, swap_chain[i].present_done, "Swapchain Fence: present_done {}",
i);
}
}

View File

@@ -20,7 +20,7 @@ namespace Vulkan {
class Instance;
class Swapchain;
class Scheduler;
class RenderpassCache;
class RenderManager;
struct Frame {
u32 width;

View File

@@ -58,13 +58,15 @@ RasterizerVulkan::RasterizerVulkan(Memory::MemorySystem& memory, Pica::PicaCore&
VideoCore::CustomTexManager& custom_tex_manager,
VideoCore::RendererBase& renderer,
Frontend::EmuWindow& emu_window, const Instance& instance,
Scheduler& scheduler, DescriptorPool& pool,
RenderpassCache& renderpass_cache, u32 image_count)
Scheduler& scheduler, RenderManager& render_manager,
DescriptorUpdateQueue& update_queue_, u32 image_count)
: RasterizerAccelerated{memory, pica}, instance{instance}, scheduler{scheduler},
renderpass_cache{renderpass_cache}, pipeline_cache{instance, scheduler, renderpass_cache,
pool},
runtime{instance, scheduler, renderpass_cache, pool, pipeline_cache.TextureProvider(),
image_count},
render_manager{render_manager}, update_queue{update_queue_},
pipeline_cache{instance, scheduler, render_manager, update_queue}, runtime{instance,
scheduler,
render_manager,
update_queue,
image_count},
res_cache{memory, custom_tex_manager, runtime, regs, renderer},
stream_buffer{instance, scheduler, BUFFER_USAGE, STREAM_BUFFER_SIZE},
uniform_buffer{instance, scheduler, vk::BufferUsageFlagBits::eUniformBuffer,
@@ -77,11 +79,12 @@ RasterizerVulkan::RasterizerVulkan(Memory::MemorySystem& memory, Pica::PicaCore&
vertex_buffers.fill(stream_buffer.Handle());
// Query uniform buffer alignment.
uniform_buffer_alignment = instance.UniformMinAlignment();
uniform_size_aligned_vs_pica =
Common::AlignUp(sizeof(VSPicaUniformData), uniform_buffer_alignment);
uniform_size_aligned_vs = Common::AlignUp(sizeof(VSUniformData), uniform_buffer_alignment);
uniform_size_aligned_fs = Common::AlignUp(sizeof(FSUniformData), uniform_buffer_alignment);
Common::AlignUp<u32>(sizeof(VSPicaUniformData), uniform_buffer_alignment);
uniform_size_aligned_vs = Common::AlignUp<u32>(sizeof(VSUniformData), uniform_buffer_alignment);
uniform_size_aligned_fs = Common::AlignUp<u32>(sizeof(FSUniformData), uniform_buffer_alignment);
// Define vertex layout for software shaders
MakeSoftwareVertexLayout();
@@ -107,24 +110,32 @@ RasterizerVulkan::RasterizerVulkan(Memory::MemorySystem& memory, Pica::PicaCore&
.range = VK_WHOLE_SIZE,
});
// Since we don't have access to VK_EXT_descriptor_indexing we need to initialize
// all descriptor sets, even the ones we don't use.
pipeline_cache.BindBuffer(0, uniform_buffer.Handle(), 0, sizeof(VSPicaUniformData));
pipeline_cache.BindBuffer(1, uniform_buffer.Handle(), 0, sizeof(VSUniformData));
pipeline_cache.BindBuffer(2, uniform_buffer.Handle(), 0, sizeof(FSUniformData));
pipeline_cache.BindTexelBuffer(3, *texture_lf_view);
pipeline_cache.BindTexelBuffer(4, *texture_rg_view);
pipeline_cache.BindTexelBuffer(5, *texture_rgba_view);
scheduler.RegisterOnSubmit([&render_manager] { render_manager.EndRendering(); });
// Prepare the static buffer descriptor set.
const auto buffer_set = pipeline_cache.Acquire(DescriptorHeapType::Buffer);
update_queue.AddBuffer(buffer_set, 0, uniform_buffer.Handle(), 0, sizeof(VSPicaUniformData));
update_queue.AddBuffer(buffer_set, 1, uniform_buffer.Handle(), 0, sizeof(VSUniformData));
update_queue.AddBuffer(buffer_set, 2, uniform_buffer.Handle(), 0, sizeof(FSUniformData));
update_queue.AddTexelBuffer(buffer_set, 3, *texture_lf_view);
update_queue.AddTexelBuffer(buffer_set, 4, *texture_rg_view);
update_queue.AddTexelBuffer(buffer_set, 5, *texture_rgba_view);
const auto texture_set = pipeline_cache.Acquire(DescriptorHeapType::Texture);
Surface& null_surface = res_cache.GetSurface(VideoCore::NULL_SURFACE_ID);
Sampler& null_sampler = res_cache.GetSampler(VideoCore::NULL_SAMPLER_ID);
// Prepare texture and utility descriptor sets.
for (u32 i = 0; i < 3; i++) {
pipeline_cache.BindTexture(i, null_surface.ImageView(), null_sampler.Handle());
update_queue.AddImageSampler(texture_set, i, 0, null_surface.ImageView(),
null_sampler.Handle());
}
for (u32 i = 0; i < 7; i++) {
pipeline_cache.BindStorageImage(i, null_surface.StorageView());
}
const auto utility_set = pipeline_cache.Acquire(DescriptorHeapType::Utility);
update_queue.AddStorageImage(utility_set, 0, null_surface.StorageView());
update_queue.AddImageSampler(utility_set, 1, 0, null_surface.ImageView(),
null_sampler.Handle());
update_queue.Flush();
SyncEntireState();
}
@@ -477,13 +488,6 @@ bool RasterizerVulkan::Draw(bool accelerate, bool is_indexed) {
pipeline_info.attachments.color = framebuffer->Format(SurfaceType::Color);
pipeline_info.attachments.depth = framebuffer->Format(SurfaceType::Depth);
if (shadow_rendering) {
pipeline_cache.BindStorageImage(6, framebuffer->ImageView(SurfaceType::Color));
} else {
Surface& null_surface = res_cache.GetSurface(VideoCore::NULL_SURFACE_ID);
pipeline_cache.BindStorageImage(6, null_surface.StorageView());
}
// Update scissor uniforms
const auto [scissor_x1, scissor_y2, scissor_x2, scissor_y1] = fb_helper.Scissor();
if (fs_uniform_block_data.data.scissor_x1 != scissor_x1 ||
@@ -500,6 +504,7 @@ bool RasterizerVulkan::Draw(bool accelerate, bool is_indexed) {
// Sync and bind the texture surfaces
SyncTextureUnits(framebuffer);
SyncUtilityTextures(framebuffer);
// Sync and bind the shader
if (shader_dirty) {
@@ -514,7 +519,7 @@ bool RasterizerVulkan::Draw(bool accelerate, bool is_indexed) {
// Begin rendering
const auto draw_rect = fb_helper.DrawRect();
renderpass_cache.BeginRendering(framebuffer, draw_rect);
render_manager.BeginRendering(framebuffer, draw_rect);
// Configure viewport and scissor
const auto viewport = fb_helper.Viewport();
@@ -533,8 +538,8 @@ bool RasterizerVulkan::Draw(bool accelerate, bool is_indexed) {
} else {
pipeline_cache.BindPipeline(pipeline_info, true);
const u64 vertex_size = vertex_batch.size() * sizeof(HardwareVertex);
const u32 vertex_count = static_cast<u32>(vertex_batch.size());
const u32 vertex_size = vertex_count * sizeof(HardwareVertex);
const auto [buffer, offset, _] = stream_buffer.Map(vertex_size, sizeof(HardwareVertex));
std::memcpy(buffer, vertex_batch.data(), vertex_size);
@@ -554,6 +559,11 @@ void RasterizerVulkan::SyncTextureUnits(const Framebuffer* framebuffer) {
using TextureType = Pica::TexturingRegs::TextureConfig::TextureType;
const auto pica_textures = regs.texturing.GetTextures();
const bool use_cube_heap =
pica_textures[0].enabled && pica_textures[0].config.type == TextureType::ShadowCube;
const auto texture_set = pipeline_cache.Acquire(use_cube_heap ? DescriptorHeapType::Texture
: DescriptorHeapType::Texture);
for (u32 texture_index = 0; texture_index < pica_textures.size(); ++texture_index) {
const auto& texture = pica_textures[texture_index];
@@ -561,8 +571,8 @@ void RasterizerVulkan::SyncTextureUnits(const Framebuffer* framebuffer) {
if (!texture.enabled) {
const Surface& null_surface = res_cache.GetSurface(VideoCore::NULL_SURFACE_ID);
const Sampler& null_sampler = res_cache.GetSampler(VideoCore::NULL_SAMPLER_ID);
pipeline_cache.BindTexture(texture_index, null_surface.ImageView(),
null_sampler.Handle());
update_queue.AddImageSampler(texture_set, texture_index, 0, null_surface.ImageView(),
null_sampler.Handle());
continue;
}
@@ -571,20 +581,21 @@ void RasterizerVulkan::SyncTextureUnits(const Framebuffer* framebuffer) {
switch (texture.config.type.Value()) {
case TextureType::Shadow2D: {
Surface& surface = res_cache.GetTextureSurface(texture);
Sampler& sampler = res_cache.GetSampler(texture.config);
surface.flags |= VideoCore::SurfaceFlagBits::ShadowMap;
pipeline_cache.BindStorageImage(0, surface.StorageView());
update_queue.AddImageSampler(texture_set, texture_index, 0, surface.StorageView(),
sampler.Handle());
continue;
}
case TextureType::ShadowCube: {
BindShadowCube(texture);
BindShadowCube(texture, texture_set);
continue;
}
case TextureType::TextureCube: {
BindTextureCube(texture);
BindTextureCube(texture, texture_set);
continue;
}
default:
UnbindSpecial();
break;
}
}
@@ -592,13 +603,26 @@ void RasterizerVulkan::SyncTextureUnits(const Framebuffer* framebuffer) {
// Bind the texture provided by the rasterizer cache
Surface& surface = res_cache.GetTextureSurface(texture);
Sampler& sampler = res_cache.GetSampler(texture.config);
if (!IsFeedbackLoop(texture_index, framebuffer, surface, sampler)) {
pipeline_cache.BindTexture(texture_index, surface.ImageView(), sampler.Handle());
}
const vk::ImageView color_view = framebuffer->ImageView(SurfaceType::Color);
const bool is_feedback_loop = color_view == surface.ImageView();
const vk::ImageView texture_view =
is_feedback_loop ? surface.CopyImageView() : surface.ImageView();
update_queue.AddImageSampler(texture_set, texture_index, 0, texture_view, sampler.Handle());
}
}
void RasterizerVulkan::BindShadowCube(const Pica::TexturingRegs::FullTextureConfig& texture) {
void RasterizerVulkan::SyncUtilityTextures(const Framebuffer* framebuffer) {
const bool shadow_rendering = regs.framebuffer.IsShadowRendering();
if (!shadow_rendering) {
return;
}
const auto utility_set = pipeline_cache.Acquire(DescriptorHeapType::Utility);
update_queue.AddStorageImage(utility_set, 0, framebuffer->ImageView(SurfaceType::Color));
}
void RasterizerVulkan::BindShadowCube(const Pica::TexturingRegs::FullTextureConfig& texture,
vk::DescriptorSet texture_set) {
using CubeFace = Pica::TexturingRegs::CubeFace;
auto info = Pica::Texture::TextureInfo::FromPicaRegister(texture.config, texture.format);
constexpr std::array faces = {
@@ -606,6 +630,8 @@ void RasterizerVulkan::BindShadowCube(const Pica::TexturingRegs::FullTextureConf
CubeFace::NegativeY, CubeFace::PositiveZ, CubeFace::NegativeZ,
};
Sampler& sampler = res_cache.GetSampler(texture.config);
for (CubeFace face : faces) {
const u32 binding = static_cast<u32>(face);
info.physical_address = regs.texturing.GetCubePhysicalAddress(face);
@@ -613,11 +639,13 @@ void RasterizerVulkan::BindShadowCube(const Pica::TexturingRegs::FullTextureConf
const VideoCore::SurfaceId surface_id = res_cache.GetTextureSurface(info);
Surface& surface = res_cache.GetSurface(surface_id);
surface.flags |= VideoCore::SurfaceFlagBits::ShadowMap;
pipeline_cache.BindStorageImage(binding, surface.StorageView());
update_queue.AddImageSampler(texture_set, 0, binding, surface.StorageView(),
sampler.Handle());
}
}
void RasterizerVulkan::BindTextureCube(const Pica::TexturingRegs::FullTextureConfig& texture) {
void RasterizerVulkan::BindTextureCube(const Pica::TexturingRegs::FullTextureConfig& texture,
vk::DescriptorSet texture_set) {
using CubeFace = Pica::TexturingRegs::CubeFace;
const VideoCore::TextureCubeConfig config = {
.px = regs.texturing.GetCubePhysicalAddress(CubeFace::PositiveX),
@@ -633,27 +661,7 @@ void RasterizerVulkan::BindTextureCube(const Pica::TexturingRegs::FullTextureCon
Surface& surface = res_cache.GetTextureCube(config);
Sampler& sampler = res_cache.GetSampler(texture.config);
pipeline_cache.BindTexture(0, surface.ImageView(), sampler.Handle());
}
bool RasterizerVulkan::IsFeedbackLoop(u32 texture_index, const Framebuffer* framebuffer,
Surface& surface, Sampler& sampler) {
const vk::ImageView color_view = framebuffer->ImageView(SurfaceType::Color);
const bool is_feedback_loop = color_view == surface.ImageView();
if (!is_feedback_loop) {
return false;
}
// Make a temporary copy of the framebuffer to sample from
pipeline_cache.BindTexture(texture_index, surface.CopyImageView(), sampler.Handle());
return true;
}
void RasterizerVulkan::UnbindSpecial() {
Surface& null_surface = res_cache.GetSurface(VideoCore::NULL_SURFACE_ID);
for (u32 i = 0; i < 6; i++) {
pipeline_cache.BindStorageImage(i, null_surface.StorageView());
}
update_queue.AddImageSampler(texture_set, 0, 0, surface.ImageView(), sampler.Handle());
}
void RasterizerVulkan::NotifyFixedFunctionPicaRegisterChanged(u32 id) {
@@ -1091,7 +1099,7 @@ void RasterizerVulkan::UploadUniforms(bool accelerate_draw) {
return;
}
const u64 uniform_size =
const u32 uniform_size =
uniform_size_aligned_vs_pica + uniform_size_aligned_vs + uniform_size_aligned_fs;
auto [uniforms, offset, invalidate] =
uniform_buffer.Map(uniform_size, uniform_buffer_alignment);
@@ -1102,18 +1110,18 @@ void RasterizerVulkan::UploadUniforms(bool accelerate_draw) {
std::memcpy(uniforms + used_bytes, &vs_uniform_block_data.data,
sizeof(vs_uniform_block_data.data));
pipeline_cache.SetBufferOffset(1, offset + used_bytes);
pipeline_cache.UpdateRange(1, offset + used_bytes);
vs_uniform_block_data.dirty = false;
used_bytes += static_cast<u32>(uniform_size_aligned_vs);
used_bytes += uniform_size_aligned_vs;
}
if (sync_fs || invalidate) {
std::memcpy(uniforms + used_bytes, &fs_uniform_block_data.data,
sizeof(fs_uniform_block_data.data));
pipeline_cache.SetBufferOffset(2, offset + used_bytes);
pipeline_cache.UpdateRange(2, offset + used_bytes);
fs_uniform_block_data.dirty = false;
used_bytes += static_cast<u32>(uniform_size_aligned_fs);
used_bytes += uniform_size_aligned_fs;
}
if (sync_vs_pica) {
@@ -1121,8 +1129,8 @@ void RasterizerVulkan::UploadUniforms(bool accelerate_draw) {
vs_uniforms.uniforms.SetFromRegs(regs.vs, pica.vs_setup);
std::memcpy(uniforms + used_bytes, &vs_uniforms, sizeof(vs_uniforms));
pipeline_cache.SetBufferOffset(0, offset + used_bytes);
used_bytes += static_cast<u32>(uniform_size_aligned_vs_pica);
pipeline_cache.UpdateRange(0, offset + used_bytes);
used_bytes += uniform_size_aligned_vs_pica;
}
uniform_buffer.Commit(used_bytes);

View File

@@ -5,8 +5,9 @@
#pragma once
#include "video_core/rasterizer_accelerated.h"
#include "video_core/renderer_vulkan/vk_descriptor_update_queue.h"
#include "video_core/renderer_vulkan/vk_pipeline_cache.h"
#include "video_core/renderer_vulkan/vk_renderpass_cache.h"
#include "video_core/renderer_vulkan/vk_render_manager.h"
#include "video_core/renderer_vulkan/vk_stream_buffer.h"
#include "video_core/renderer_vulkan/vk_texture_runtime.h"
@@ -31,16 +32,16 @@ struct ScreenInfo;
class Instance;
class Scheduler;
class RenderpassCache;
class DescriptorPool;
class RenderManager;
class RasterizerVulkan : public VideoCore::RasterizerAccelerated {
public:
explicit RasterizerVulkan(Memory::MemorySystem& memory, Pica::PicaCore& pica,
VideoCore::CustomTexManager& custom_tex_manager,
VideoCore::RendererBase& renderer, Frontend::EmuWindow& emu_window,
const Instance& instance, Scheduler& scheduler, DescriptorPool& pool,
RenderpassCache& renderpass_cache, u32 image_count);
const Instance& instance, Scheduler& scheduler,
RenderManager& render_manager, DescriptorUpdateQueue& update_queue,
u32 image_count);
~RasterizerVulkan() override;
void TickFrame();
@@ -102,18 +103,16 @@ private:
/// Syncs all enabled PICA texture units
void SyncTextureUnits(const Framebuffer* framebuffer);
/// Syncs all utility textures in the fragment shader.
void SyncUtilityTextures(const Framebuffer* framebuffer);
/// Binds the PICA shadow cube required for shadow mapping
void BindShadowCube(const Pica::TexturingRegs::FullTextureConfig& texture);
void BindShadowCube(const Pica::TexturingRegs::FullTextureConfig& texture,
vk::DescriptorSet texture_set);
/// Binds a texture cube to texture unit 0
void BindTextureCube(const Pica::TexturingRegs::FullTextureConfig& texture);
/// Makes a temporary copy of the framebuffer if a feedback loop is detected
bool IsFeedbackLoop(u32 texture_index, const Framebuffer* framebuffer, Surface& surface,
Sampler& sampler);
/// Unbinds all special texture unit 0 texture configurations
void UnbindSpecial();
void BindTextureCube(const Pica::TexturingRegs::FullTextureConfig& texture,
vk::DescriptorSet texture_set);
/// Upload the uniform blocks to the uniform buffer object
void UploadUniforms(bool accelerate_draw);
@@ -145,7 +144,8 @@ private:
private:
const Instance& instance;
Scheduler& scheduler;
RenderpassCache& renderpass_cache;
RenderManager& render_manager;
DescriptorUpdateQueue& update_queue;
PipelineCache pipeline_cache;
TextureRuntime runtime;
RasterizerCache res_cache;
@@ -164,10 +164,10 @@ private:
vk::UniqueBufferView texture_lf_view;
vk::UniqueBufferView texture_rg_view;
vk::UniqueBufferView texture_rgba_view;
u64 uniform_buffer_alignment;
u64 uniform_size_aligned_vs_pica;
u64 uniform_size_aligned_vs;
u64 uniform_size_aligned_fs;
vk::DeviceSize uniform_buffer_alignment;
u32 uniform_size_aligned_vs_pica;
u32 uniform_size_aligned_vs;
u32 uniform_size_aligned_fs;
bool async_shaders{false};
};

View File

@@ -1,4 +1,4 @@
// Copyright 2023 Citra Emulator Project
// Copyright 2024 Citra Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
@@ -6,7 +6,7 @@
#include "common/assert.h"
#include "video_core/rasterizer_cache/pixel_format.h"
#include "video_core/renderer_vulkan/vk_instance.h"
#include "video_core/renderer_vulkan/vk_renderpass_cache.h"
#include "video_core/renderer_vulkan/vk_render_manager.h"
#include "video_core/renderer_vulkan/vk_scheduler.h"
#include "video_core/renderer_vulkan/vk_texture_runtime.h"
@@ -17,13 +17,13 @@ constexpr u32 MIN_DRAWS_TO_FLUSH = 20;
using VideoCore::PixelFormat;
using VideoCore::SurfaceType;
RenderpassCache::RenderpassCache(const Instance& instance, Scheduler& scheduler)
RenderManager::RenderManager(const Instance& instance, Scheduler& scheduler)
: instance{instance}, scheduler{scheduler} {}
RenderpassCache::~RenderpassCache() = default;
RenderManager::~RenderManager() = default;
void RenderpassCache::BeginRendering(const Framebuffer* framebuffer,
Common::Rectangle<u32> draw_rect) {
void RenderManager::BeginRendering(const Framebuffer* framebuffer,
Common::Rectangle<u32> draw_rect) {
const vk::Rect2D render_area = {
.offset{
.x = static_cast<s32>(draw_rect.left),
@@ -46,7 +46,7 @@ void RenderpassCache::BeginRendering(const Framebuffer* framebuffer,
BeginRendering(new_pass);
}
void RenderpassCache::BeginRendering(const RenderPass& new_pass) {
void RenderManager::BeginRendering(const RenderPass& new_pass) {
if (pass == new_pass) [[likely]] {
num_draws++;
return;
@@ -67,12 +67,11 @@ void RenderpassCache::BeginRendering(const RenderPass& new_pass) {
pass = new_pass;
}
void RenderpassCache::EndRendering() {
void RenderManager::EndRendering() {
if (!pass.render_pass) {
return;
}
pass.render_pass = vk::RenderPass{};
scheduler.Record([images = images, aspects = aspects](vk::CommandBuffer cmdbuf) {
u32 num_barriers = 0;
vk::PipelineStageFlags pipeline_flags{};
@@ -108,6 +107,9 @@ void RenderpassCache::EndRendering() {
};
}
cmdbuf.endRenderPass();
if (num_barriers == 0) {
return;
}
cmdbuf.pipelineBarrier(pipeline_flags,
vk::PipelineStageFlagBits::eFragmentShader |
vk::PipelineStageFlagBits::eTransfer,
@@ -115,6 +117,11 @@ void RenderpassCache::EndRendering() {
num_barriers, barriers.data());
});
// Reset state.
pass.render_pass = vk::RenderPass{};
images = {};
aspects = {};
// The Mali guide recommends flushing at the end of each major renderpass
// Testing has shown this has a significant effect on rendering performance
if (num_draws > MIN_DRAWS_TO_FLUSH && instance.ShouldFlush()) {
@@ -123,8 +130,8 @@ void RenderpassCache::EndRendering() {
}
}
vk::RenderPass RenderpassCache::GetRenderpass(VideoCore::PixelFormat color,
VideoCore::PixelFormat depth, bool is_clear) {
vk::RenderPass RenderManager::GetRenderpass(VideoCore::PixelFormat color,
VideoCore::PixelFormat depth, bool is_clear) {
std::scoped_lock lock{cache_mutex};
const u32 color_index =
@@ -148,8 +155,8 @@ vk::RenderPass RenderpassCache::GetRenderpass(VideoCore::PixelFormat color,
return *renderpass;
}
vk::UniqueRenderPass RenderpassCache::CreateRenderPass(vk::Format color, vk::Format depth,
vk::AttachmentLoadOp load_op) const {
vk::UniqueRenderPass RenderManager::CreateRenderPass(vk::Format color, vk::Format depth,
vk::AttachmentLoadOp load_op) const {
u32 attachment_count = 0;
std::array<vk::AttachmentDescription, 2> attachments;

View File

@@ -1,4 +1,4 @@
// Copyright 2023 Citra Emulator Project
// Copyright 2024 Citra Emulator Project
// Licensed under GPLv2 or any later version
// Refer to the license.txt file included.
@@ -24,7 +24,7 @@ struct RenderPass {
vk::RenderPass render_pass;
vk::Rect2D render_area;
vk::ClearValue clear;
bool do_clear;
u32 do_clear;
bool operator==(const RenderPass& other) const noexcept {
return std::tie(framebuffer, render_pass, render_area, do_clear) ==
@@ -34,13 +34,13 @@ struct RenderPass {
}
};
class RenderpassCache {
class RenderManager {
static constexpr std::size_t MAX_COLOR_FORMATS = 13;
static constexpr std::size_t MAX_DEPTH_FORMATS = 4;
public:
explicit RenderpassCache(const Instance& instance, Scheduler& scheduler);
~RenderpassCache();
explicit RenderManager(const Instance& instance, Scheduler& scheduler);
~RenderManager();
/// Begins a new renderpass with the provided framebuffer as render target.
void BeginRendering(const Framebuffer* framebuffer, Common::Rectangle<u32> draw_rect);

View File

@@ -4,6 +4,7 @@
#include <cstddef>
#include <optional>
#include <unordered_map>
#include "video_core/renderer_vulkan/vk_instance.h"
#include "video_core/renderer_vulkan/vk_master_semaphore.h"
#include "video_core/renderer_vulkan/vk_resource_pool.h"
@@ -14,9 +15,7 @@ ResourcePool::ResourcePool(MasterSemaphore* master_semaphore_, std::size_t grow_
: master_semaphore{master_semaphore_}, grow_step{grow_step_} {}
std::size_t ResourcePool::CommitResource() {
// Refresh semaphore to query updated results
master_semaphore->Refresh();
const u64 gpu_tick = master_semaphore->KnownGpuTick();
u64 gpu_tick = master_semaphore->KnownGpuTick();
const auto search = [this, gpu_tick](std::size_t begin,
std::size_t end) -> std::optional<std::size_t> {
for (std::size_t iterator = begin; iterator < end; ++iterator) {
@@ -29,7 +28,13 @@ std::size_t ResourcePool::CommitResource() {
};
// Try to find a free resource from the hinted position to the end.
std::optional<std::size_t> found = search(hint_iterator, ticks.size());
auto found = search(hint_iterator, ticks.size());
if (!found) {
// Refresh semaphore to query updated results
master_semaphore->Refresh();
gpu_tick = master_semaphore->KnownGpuTick();
found = search(hint_iterator, ticks.size());
}
if (!found) {
// Search from beginning to the hinted position.
found = search(0, hint_iterator);
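Taken together, the new flow first scans with the cached GPU tick and only refreshes the timeline semaphore when that scan fails; a condensed, hedged sketch of the search (the free-function signature is invented for illustration, the logic mirrors the hunk above):

#include <cstdint>
#include <functional>
#include <optional>
#include <span>

// Returns the index of a slot the GPU has already finished with, or std::nullopt if every slot
// is still in flight (in which case the caller grows the pool, e.g. via ManageOverflow()).
std::optional<std::size_t> FindFreeSlot(std::span<const std::uint64_t> ticks, std::size_t hint,
                                        std::uint64_t gpu_tick,
                                        const std::function<std::uint64_t()>& refresh) {
    const auto scan = [&](std::size_t begin, std::size_t end) -> std::optional<std::size_t> {
        for (std::size_t i = begin; i < end; ++i) {
            if (gpu_tick >= ticks[i]) {
                return i; // the GPU has passed this slot's tick, so it can be reused
            }
        }
        return std::nullopt;
    };
    auto found = scan(hint, ticks.size());
    if (!found) {
        gpu_tick = refresh(); // query completed GPU work only when the cached tick was not enough
        found = scan(hint, ticks.size());
    }
    if (!found) {
        found = scan(0, hint); // wrap around and search from the beginning
    }
    return found;
}
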
@@ -48,75 +53,137 @@ std::size_t ResourcePool::CommitResource() {
}
std::size_t ResourcePool::ManageOverflow() {
const std::size_t old_capacity = ticks.size();
Grow();
// The last entry is guaranteed to be free, since it's the first element of the freshly
// allocated resources.
return old_capacity;
}
void ResourcePool::Grow() {
const std::size_t old_capacity = ticks.size();
ticks.resize(old_capacity + grow_step);
Allocate(old_capacity, old_capacity + grow_step);
return old_capacity;
}
constexpr std::size_t COMMAND_BUFFER_POOL_SIZE = 4;
struct CommandPool::Pool {
vk::CommandPool handle;
std::array<vk::CommandBuffer, COMMAND_BUFFER_POOL_SIZE> cmdbufs;
};
CommandPool::CommandPool(const Instance& instance, MasterSemaphore* master_semaphore)
: ResourcePool{master_semaphore, COMMAND_BUFFER_POOL_SIZE}, instance{instance} {}
CommandPool::~CommandPool() {
vk::Device device = instance.GetDevice();
for (Pool& pool : pools) {
device.destroyCommandPool(pool.handle);
}
}
void CommandPool::Allocate(std::size_t begin, std::size_t end) {
// Command buffers are going to be committed, recorded, and executed every usage cycle.
// They are also going to be reset when committed.
Pool& pool = pools.emplace_back();
: ResourcePool{master_semaphore, COMMAND_BUFFER_POOL_SIZE}, instance{instance} {
const vk::CommandPoolCreateInfo pool_create_info = {
.flags = vk::CommandPoolCreateFlagBits::eTransient |
vk::CommandPoolCreateFlagBits::eResetCommandBuffer,
.queueFamilyIndex = instance.GetGraphicsQueueFamilyIndex(),
};
const vk::Device device = instance.GetDevice();
cmd_pool = device.createCommandPoolUnique(pool_create_info);
if (instance.HasDebuggingToolAttached()) {
SetObjectName(device, *cmd_pool, "CommandPool");
}
}
vk::Device device = instance.GetDevice();
pool.handle = device.createCommandPool(pool_create_info);
CommandPool::~CommandPool() = default;
void CommandPool::Allocate(std::size_t begin, std::size_t end) {
cmd_buffers.resize(end);
const vk::CommandBufferAllocateInfo buffer_alloc_info = {
.commandPool = pool.handle,
.commandPool = *cmd_pool,
.level = vk::CommandBufferLevel::ePrimary,
.commandBufferCount = COMMAND_BUFFER_POOL_SIZE,
};
auto buffers = device.allocateCommandBuffers(buffer_alloc_info);
std::copy(buffers.begin(), buffers.end(), pool.cmdbufs.begin());
const vk::Device device = instance.GetDevice();
const auto result =
device.allocateCommandBuffers(&buffer_alloc_info, cmd_buffers.data() + begin);
ASSERT(result == vk::Result::eSuccess);
if (instance.HasDebuggingToolAttached()) {
Vulkan::SetObjectName(device, pool.handle, "CommandPool: Pool({})",
COMMAND_BUFFER_POOL_SIZE);
for (u32 i = 0; i < pool.cmdbufs.size(); ++i) {
Vulkan::SetObjectName(device, pool.cmdbufs[i], "CommandPool: Command Buffer {}", i);
for (std::size_t i = begin; i < end; ++i) {
SetObjectName(device, cmd_buffers[i], "CommandPool: Command Buffer {}", i);
}
}
}
vk::CommandBuffer CommandPool::Commit() {
const std::size_t index = CommitResource();
const auto pool_index = index / COMMAND_BUFFER_POOL_SIZE;
const auto sub_index = index % COMMAND_BUFFER_POOL_SIZE;
return pools[pool_index].cmdbufs[sub_index];
return cmd_buffers[index];
}
constexpr u32 DESCRIPTOR_SET_BATCH = 32;
DescriptorHeap::DescriptorHeap(const Instance& instance, MasterSemaphore* master_semaphore,
std::span<const vk::DescriptorSetLayoutBinding> bindings,
u32 descriptor_heap_count_)
: ResourcePool{master_semaphore, DESCRIPTOR_SET_BATCH}, device{instance.GetDevice()},
descriptor_heap_count{descriptor_heap_count_} {
// Create descriptor set layout.
const vk::DescriptorSetLayoutCreateInfo layout_ci = {
.bindingCount = static_cast<u32>(bindings.size()),
.pBindings = bindings.data(),
};
descriptor_set_layout = device.createDescriptorSetLayoutUnique(layout_ci);
if (instance.HasDebuggingToolAttached()) {
SetObjectName(device, *descriptor_set_layout, "DescriptorSetLayout");
}
// Build descriptor set pool counts.
std::unordered_map<vk::DescriptorType, u16> descriptor_type_counts;
for (const auto& binding : bindings) {
descriptor_type_counts[binding.descriptorType] += binding.descriptorCount;
}
for (const auto& [type, count] : descriptor_type_counts) {
auto& pool_size = pool_sizes.emplace_back();
pool_size.descriptorCount = count * descriptor_heap_count;
pool_size.type = type;
}
// Create descriptor pool
AppendDescriptorPool();
}
DescriptorHeap::~DescriptorHeap() = default;
void DescriptorHeap::Allocate(std::size_t begin, std::size_t end) {
ASSERT(end - begin == DESCRIPTOR_SET_BATCH);
descriptor_sets.resize(end);
std::array<vk::DescriptorSetLayout, DESCRIPTOR_SET_BATCH> layouts;
layouts.fill(*descriptor_set_layout);
u32 current_pool = 0;
vk::DescriptorSetAllocateInfo alloc_info = {
.descriptorPool = *pools[current_pool],
.descriptorSetCount = DESCRIPTOR_SET_BATCH,
.pSetLayouts = layouts.data(),
};
// Attempt to allocate the descriptor set batch. If the pool has run out of space, use a new
// one.
while (true) {
const auto result =
device.allocateDescriptorSets(&alloc_info, descriptor_sets.data() + begin);
if (result == vk::Result::eSuccess) {
break;
}
if (result == vk::Result::eErrorOutOfPoolMemory) {
current_pool++;
if (current_pool == pools.size()) {
LOG_INFO(Render_Vulkan, "Ran out of pools, creating a new one!");
AppendDescriptorPool();
}
alloc_info.descriptorPool = *pools[current_pool];
}
}
}
vk::DescriptorSet DescriptorHeap::Commit() {
const std::size_t index = CommitResource();
return descriptor_sets[index];
}
void DescriptorHeap::AppendDescriptorPool() {
const vk::DescriptorPoolCreateInfo pool_info = {
.flags = vk::DescriptorPoolCreateFlagBits::eFreeDescriptorSet,
.maxSets = descriptor_heap_count,
.poolSizeCount = static_cast<u32>(pool_sizes.size()),
.pPoolSizes = pool_sizes.data(),
};
auto& pool = pools.emplace_back();
pool = device.createDescriptorPoolUnique(pool_info);
}
} // namespace Vulkan

View File

@@ -39,9 +39,6 @@ private:
/// Manages pool overflow allocating new resources.
std::size_t ManageOverflow();
/// Allocates a new page of resources.
void Grow();
protected:
MasterSemaphore* master_semaphore{nullptr};
std::size_t grow_step = 0; ///< Number of new resources created after an overflow
@@ -59,9 +56,36 @@ public:
vk::CommandBuffer Commit();
private:
struct Pool;
const Instance& instance;
std::vector<Pool> pools;
vk::UniqueCommandPool cmd_pool;
std::vector<vk::CommandBuffer> cmd_buffers;
};
class DescriptorHeap final : public ResourcePool {
public:
explicit DescriptorHeap(const Instance& instance, MasterSemaphore* master_semaphore,
std::span<const vk::DescriptorSetLayoutBinding> bindings,
u32 descriptor_heap_count = 1024);
~DescriptorHeap() override;
const vk::DescriptorSetLayout& Layout() const {
return *descriptor_set_layout;
}
void Allocate(std::size_t begin, std::size_t end) override;
vk::DescriptorSet Commit();
private:
void AppendDescriptorPool();
private:
vk::Device device;
vk::UniqueDescriptorSetLayout descriptor_set_layout;
u32 descriptor_heap_count;
std::vector<vk::DescriptorPoolSize> pool_sizes;
std::vector<vk::UniqueDescriptorPool> pools;
std::vector<vk::DescriptorSet> descriptor_sets;
};
} // namespace Vulkan
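A hedged usage sketch of the new DescriptorHeap API; the single-binding layout and the surrounding instance/master_semaphore objects are assumptions for illustration, not taken from this diff (assumes vk_resource_pool.h and Vulkan-Hpp with designated initializers, as used throughout this PR):

#include <array>
// One dynamic uniform-buffer binding; the PR's real heaps carry the full rasterizer layouts.
const std::array bindings = {
    vk::DescriptorSetLayoutBinding{
        .binding = 0,
        .descriptorType = vk::DescriptorType::eUniformBufferDynamic,
        .descriptorCount = 1,
        .stageFlags = vk::ShaderStageFlagBits::eVertex,
    },
};
DescriptorHeap heap{instance, &master_semaphore, bindings};
const vk::DescriptorSet set = heap.Commit(); // recycled once its GPU tick has been reached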

View File

@@ -5,10 +5,8 @@
#include <mutex>
#include <utility>
#include "common/microprofile.h"
#include "common/settings.h"
#include "common/thread.h"
#include "video_core/renderer_vulkan/vk_instance.h"
#include "video_core/renderer_vulkan/vk_renderpass_cache.h"
#include "video_core/renderer_vulkan/vk_scheduler.h"
MICROPROFILE_DEFINE(Vulkan_WaitForWorker, "Vulkan", "Wait for worker", MP_RGB(255, 192, 192));
@@ -98,6 +96,8 @@ void Scheduler::DispatchWork() {
return;
}
on_dispatch();
{
std::scoped_lock ql{queue_mutex};
work_queue.push(std::move(chunk));
@@ -173,12 +173,16 @@ void Scheduler::SubmitExecution(vk::Semaphore signal_semaphore, vk::Semaphore wa
state = StateFlags::AllDirty;
const u64 signal_value = master_semaphore->NextTick();
on_submit();
Record([signal_semaphore, wait_semaphore, signal_value, this](vk::CommandBuffer cmdbuf) {
MICROPROFILE_SCOPE(Vulkan_Submit);
std::scoped_lock lock{submit_mutex};
master_semaphore->SubmitWork(cmdbuf, wait_semaphore, signal_semaphore, signal_value);
});
master_semaphore->Refresh();
if (!use_worker_thread) {
AllocateWorkerCommandBuffers();
} else {

View File

@@ -4,6 +4,7 @@
#pragma once
#include <functional>
#include <memory>
#include <utility>
#include "common/alignment.h"
@@ -49,11 +50,6 @@ public:
/// Records the command to the current chunk.
template <typename T>
void Record(T&& command) {
if (!use_worker_thread) {
command(current_cmdbuf);
return;
}
if (chunk->Record(command)) {
return;
}
@@ -76,6 +72,16 @@ public:
return False(state & flag);
}
/// Registers a callback to perform on queue submission.
void RegisterOnSubmit(std::function<void()>&& func) {
on_submit = std::move(func);
}
/// Registers a callback to perform on work dispatch.
void RegisterOnDispatch(std::function<void()>&& func) {
on_dispatch = std::move(func);
}
/// Returns the current command buffer tick.
[[nodiscard]] u64 CurrentTick() const noexcept {
return master_semaphore->CurrentTick();
@@ -194,6 +200,8 @@ private:
std::vector<std::unique_ptr<CommandChunk>> chunk_reserve;
vk::CommandBuffer current_cmdbuf;
StateFlags state{};
std::function<void()> on_submit;
std::function<void()> on_dispatch;
std::mutex execution_mutex;
std::mutex reserve_mutex;
std::mutex queue_mutex;
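For reference, the rasterizer constructor earlier in this diff wires the render manager into the new submit hook; a one-line sketch of the intent:

// End any open dynamic render pass before each queue submission so the recorded
// command buffer is always in a submittable state.
scheduler.RegisterOnSubmit([&render_manager] { render_manager.EndRendering(); });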

View File

@@ -182,6 +182,7 @@ vk::ShaderModule Compile(std::string_view code, vk::ShaderStageFlagBits stage, v
includer)) [[unlikely]] {
LOG_INFO(Render_Vulkan, "Shader Info Log:\n{}\n{}", shader->getInfoLog(),
shader->getInfoDebugLog());
LOG_INFO(Render_Vulkan, "Shader Source:\n{}", code);
return {};
}

View File

@@ -82,7 +82,7 @@ StreamBuffer::~StreamBuffer() {
device.freeMemory(memory);
}
std::tuple<u8*, u64, bool> StreamBuffer::Map(u64 size, u64 alignment) {
std::tuple<u8*, u32, bool> StreamBuffer::Map(u32 size, u64 alignment) {
if (!is_coherent && type == BufferType::Stream) {
size = Common::AlignUp(size, instance.NonCoherentAtomSize());
}
@@ -114,7 +114,7 @@ std::tuple<u8*, u64, bool> StreamBuffer::Map(u64 size, u64 alignment) {
return std::make_tuple(mapped + offset, offset, invalidate);
}
void StreamBuffer::Commit(u64 size) {
void StreamBuffer::Commit(u32 size) {
if (!is_coherent && type == BufferType::Stream) {
size = Common::AlignUp(size, instance.NonCoherentAtomSize());
}
@@ -200,11 +200,10 @@ void StreamBuffer::CreateBuffers(u64 prefered_size) {
mapped = reinterpret_cast<u8*>(device.mapMemory(memory, 0, VK_WHOLE_SIZE));
if (instance.HasDebuggingToolAttached()) {
Vulkan::SetObjectName(device, buffer, "StreamBuffer({}): {} KiB {}", BufferTypeName(type),
stream_buffer_size / 1024, vk::to_string(mem_type.propertyFlags));
Vulkan::SetObjectName(device, memory, "StreamBufferMemory({}): {} Kib {}",
BufferTypeName(type), stream_buffer_size / 1024,
vk::to_string(mem_type.propertyFlags));
SetObjectName(device, buffer, "StreamBuffer({}): {} KiB {}", BufferTypeName(type),
stream_buffer_size / 1024, vk::to_string(mem_type.propertyFlags));
SetObjectName(device, memory, "StreamBufferMemory({}): {} KiB {}", BufferTypeName(type),
stream_buffer_size / 1024, vk::to_string(mem_type.propertyFlags));
}
}

View File

@@ -35,10 +35,10 @@ public:
* @param size Size to reserve.
* @returns A pair of a raw memory pointer (with offset added), and the buffer offset
*/
std::tuple<u8*, u64, bool> Map(u64 size, u64 alignment);
std::tuple<u8*, u32, bool> Map(u32 size, u64 alignment);
/// Ensures that "size" bytes of memory are available to the GPU, potentially recording a copy.
void Commit(u64 size);
void Commit(u32 size);
vk::Buffer Handle() const noexcept {
return buffer;
@@ -70,8 +70,8 @@ private:
vk::BufferUsageFlags usage{};
BufferType type;
u64 offset{}; ///< Buffer iterator.
u64 mapped_size{}; ///< Size reserved for the current copy.
u32 offset{}; ///< Buffer iterator.
u32 mapped_size{}; ///< Size reserved for the current copy.
bool is_coherent{}; ///< True if the buffer is coherent
std::vector<Watch> current_watches; ///< Watches recorded in the current iteration.
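A short usage sketch of the narrowed Map/Commit interface above, mirroring the vertex upload in the rasterizer (names taken from the diff, sizes illustrative; assumes <cstring> for std::memcpy):

const u32 vertex_count = static_cast<u32>(vertex_batch.size());
const u32 vertex_size = vertex_count * static_cast<u32>(sizeof(HardwareVertex));
const auto [ptr, offset, invalidate] = stream_buffer.Map(vertex_size, sizeof(HardwareVertex));
std::memcpy(ptr, vertex_batch.data(), vertex_size); // copy vertices into the mapped window
stream_buffer.Commit(vertex_size);                  // publish them to the GPU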

View File

@@ -250,10 +250,8 @@ void Swapchain::RefreshSemaphores() {
if (instance.HasDebuggingToolAttached()) {
for (u32 i = 0; i < image_count; ++i) {
Vulkan::SetObjectName(device, image_acquired[i],
"Swapchain Semaphore: image_acquired {}", i);
Vulkan::SetObjectName(device, present_ready[i], "Swapchain Semaphore: present_ready {}",
i);
SetObjectName(device, image_acquired[i], "Swapchain Semaphore: image_acquired {}", i);
SetObjectName(device, present_ready[i], "Swapchain Semaphore: present_ready {}", i);
}
}
}
@@ -265,7 +263,7 @@ void Swapchain::SetupImages() {
if (instance.HasDebuggingToolAttached()) {
for (u32 i = 0; i < image_count; ++i) {
Vulkan::SetObjectName(device, images[i], "Swapchain Image {}", i);
SetObjectName(device, images[i], "Swapchain Image {}", i);
}
}
}

View File

@@ -3,6 +3,7 @@
// Refer to the license.txt file included.
#include <boost/container/small_vector.hpp>
#include <boost/container/static_vector.hpp>
#include "common/literals.h"
#include "common/microprofile.h"
@@ -11,9 +12,8 @@
#include "video_core/rasterizer_cache/texture_codec.h"
#include "video_core/rasterizer_cache/utils.h"
#include "video_core/renderer_vulkan/pica_to_vk.h"
#include "video_core/renderer_vulkan/vk_descriptor_pool.h"
#include "video_core/renderer_vulkan/vk_instance.h"
#include "video_core/renderer_vulkan/vk_renderpass_cache.h"
#include "video_core/renderer_vulkan/vk_render_manager.h"
#include "video_core/renderer_vulkan/vk_scheduler.h"
#include "video_core/renderer_vulkan/vk_texture_runtime.h"
@@ -119,9 +119,9 @@ u32 UnpackDepthStencil(const VideoCore::StagingData& data, vk::Format dest) {
}
boost::container::small_vector<vk::ImageMemoryBarrier, 3> MakeInitBarriers(
vk::ImageAspectFlags aspect, std::span<const vk::Image> images, std::size_t num_images) {
vk::ImageAspectFlags aspect, std::span<const vk::Image> images) {
boost::container::small_vector<vk::ImageMemoryBarrier, 3> barriers;
for (std::size_t i = 0; i < num_images; i++) {
for (const vk::Image& image : images) {
barriers.push_back(vk::ImageMemoryBarrier{
.srcAccessMask = vk::AccessFlagBits::eNone,
.dstAccessMask = vk::AccessFlagBits::eNone,
@@ -129,7 +129,7 @@ boost::container::small_vector<vk::ImageMemoryBarrier, 3> MakeInitBarriers(
.newLayout = vk::ImageLayout::eGeneral,
.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
.image = images[i],
.image = image,
.subresourceRange{
.aspectMask = aspect,
.baseMipLevel = 0,
@@ -206,9 +206,9 @@ Handle MakeHandle(const Instance* instance, u32 width, u32 height, u32 levels, T
vk::UniqueImageView image_view = instance->GetDevice().createImageViewUnique(view_info);
if (!debug_name.empty() && instance->HasDebuggingToolAttached()) {
Vulkan::SetObjectName(instance->GetDevice(), image, debug_name);
Vulkan::SetObjectName(instance->GetDevice(), image_view.get(), "{} View({})", debug_name,
vk::to_string(aspect));
SetObjectName(instance->GetDevice(), image, debug_name);
SetObjectName(instance->GetDevice(), image_view.get(), "{} View({})", debug_name,
vk::to_string(aspect));
}
return Handle{
@@ -219,11 +219,10 @@ Handle MakeHandle(const Instance* instance, u32 width, u32 height, u32 levels, T
}
vk::UniqueFramebuffer MakeFramebuffer(vk::Device device, vk::RenderPass render_pass, u32 width,
u32 height, std::span<const vk::ImageView> attachments,
u32 num_attachments) {
u32 height, std::span<const vk::ImageView> attachments) {
const vk::FramebufferCreateInfo framebuffer_info = {
.renderPass = render_pass,
.attachmentCount = num_attachments,
.attachmentCount = static_cast<u32>(attachments.size()),
.pAttachments = attachments.data(),
.width = width,
.height = height,
@@ -249,10 +248,10 @@ constexpr u64 DOWNLOAD_BUFFER_SIZE = 16_MiB;
} // Anonymous namespace
TextureRuntime::TextureRuntime(const Instance& instance, Scheduler& scheduler,
RenderpassCache& renderpass_cache, DescriptorPool& pool,
DescriptorSetProvider& texture_provider_, u32 num_swapchain_images_)
: instance{instance}, scheduler{scheduler}, renderpass_cache{renderpass_cache},
texture_provider{texture_provider_}, blit_helper{instance, scheduler, pool, renderpass_cache},
RenderManager& render_manager, DescriptorUpdateQueue& update_queue,
u32 num_swapchain_images_)
: instance{instance}, scheduler{scheduler}, render_manager{render_manager},
blit_helper{instance, scheduler, render_manager, update_queue},
upload_buffer{instance, scheduler, vk::BufferUsageFlagBits::eTransferSrc, UPLOAD_BUFFER_SIZE,
BufferType::Upload},
download_buffer{instance, scheduler,
@@ -268,7 +267,7 @@ VideoCore::StagingData TextureRuntime::FindStaging(u32 size, bool upload) {
const auto [data, offset, invalidate] = buffer.Map(size, 16);
return VideoCore::StagingData{
.size = size,
.offset = static_cast<u32>(offset),
.offset = offset,
.mapped = std::span{data, size},
};
}
@@ -305,7 +304,7 @@ bool TextureRuntime::Reinterpret(Surface& source, Surface& dest,
}
bool TextureRuntime::ClearTexture(Surface& surface, const VideoCore::TextureClear& clear) {
renderpass_cache.EndRendering();
render_manager.EndRendering();
const RecordParams params = {
.aspect = surface.Aspect(),
@@ -377,7 +376,7 @@ void TextureRuntime::ClearTextureWithRenderpass(Surface& surface,
const auto color_format = is_color ? surface.pixel_format : PixelFormat::Invalid;
const auto depth_format = is_color ? PixelFormat::Invalid : surface.pixel_format;
const auto render_pass = renderpass_cache.GetRenderpass(color_format, depth_format, true);
const auto render_pass = render_manager.GetRenderpass(color_format, depth_format, true);
const RecordParams params = {
.aspect = surface.Aspect(),
@@ -453,8 +452,8 @@ void TextureRuntime::ClearTextureWithRenderpass(Surface& surface,
}
bool TextureRuntime::CopyTextures(Surface& source, Surface& dest,
const VideoCore::TextureCopy& copy) {
renderpass_cache.EndRendering();
std::span<const VideoCore::TextureCopy> copies) {
render_manager.EndRendering();
const RecordParams params = {
.aspect = source.Aspect(),
@@ -466,8 +465,9 @@ bool TextureRuntime::CopyTextures(Surface& source, Surface& dest,
.dst_image = dest.Image(),
};
scheduler.Record([params, copy](vk::CommandBuffer cmdbuf) {
const vk::ImageCopy image_copy = {
boost::container::small_vector<vk::ImageCopy, 2> vk_copies;
std::ranges::transform(copies, std::back_inserter(vk_copies), [&](const auto& copy) {
return vk::ImageCopy{
.srcSubresource{
.aspectMask = params.aspect,
.mipLevel = copy.src_level,
@@ -486,7 +486,9 @@ bool TextureRuntime::CopyTextures(Surface& source, Surface& dest,
0},
.extent = {copy.extent.width, copy.extent.height, 1},
};
});
scheduler.Record([params, copies = std::move(vk_copies)](vk::CommandBuffer cmdbuf) {
const bool self_copy = params.src_image == params.dst_image;
const vk::ImageLayout new_src_layout =
self_copy ? vk::ImageLayout::eGeneral : vk::ImageLayout::eTransferSrcOptimal;
@@ -502,7 +504,7 @@ bool TextureRuntime::CopyTextures(Surface& source, Surface& dest,
.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
.image = params.src_image,
.subresourceRange = MakeSubresourceRange(params.aspect, copy.src_level),
.subresourceRange = MakeSubresourceRange(params.aspect, 0, VK_REMAINING_MIP_LEVELS),
},
vk::ImageMemoryBarrier{
.srcAccessMask = params.dst_access,
@@ -512,7 +514,7 @@ bool TextureRuntime::CopyTextures(Surface& source, Surface& dest,
.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
.image = params.dst_image,
.subresourceRange = MakeSubresourceRange(params.aspect, copy.dst_level),
.subresourceRange = MakeSubresourceRange(params.aspect, 0, VK_REMAINING_MIP_LEVELS),
},
};
const std::array post_barriers = {
@@ -524,7 +526,7 @@ bool TextureRuntime::CopyTextures(Surface& source, Surface& dest,
.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
.image = params.src_image,
.subresourceRange = MakeSubresourceRange(params.aspect, copy.src_level),
.subresourceRange = MakeSubresourceRange(params.aspect, 0, VK_REMAINING_MIP_LEVELS),
},
vk::ImageMemoryBarrier{
.srcAccessMask = vk::AccessFlagBits::eTransferWrite,
@@ -534,7 +536,7 @@ bool TextureRuntime::CopyTextures(Surface& source, Surface& dest,
.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
.image = params.dst_image,
.subresourceRange = MakeSubresourceRange(params.aspect, copy.dst_level),
.subresourceRange = MakeSubresourceRange(params.aspect, 0, VK_REMAINING_MIP_LEVELS),
},
};
@@ -542,7 +544,7 @@ bool TextureRuntime::CopyTextures(Surface& source, Surface& dest,
vk::DependencyFlagBits::eByRegion, {}, {}, pre_barriers);
cmdbuf.copyImage(params.src_image, new_src_layout, params.dst_image, new_dst_layout,
image_copy);
copies);
cmdbuf.pipelineBarrier(vk::PipelineStageFlagBits::eTransfer, params.pipeline_flags,
vk::DependencyFlagBits::eByRegion, {}, {}, post_barriers);
@@ -559,7 +561,7 @@ bool TextureRuntime::BlitTextures(Surface& source, Surface& dest,
return blit_helper.BlitDepthStencil(source, dest, blit);
}
renderpass_cache.EndRendering();
render_manager.EndRendering();
const RecordParams params = {
.aspect = source.Aspect(),
@@ -667,7 +669,7 @@ void TextureRuntime::GenerateMipmaps(Surface& surface) {
return;
}
renderpass_cache.EndRendering();
render_manager.EndRendering();
auto [width, height] = surface.RealExtent();
const u32 levels = surface.levels;
@@ -694,13 +696,6 @@ bool TextureRuntime::NeedsConversion(VideoCore::PixelFormat format) const {
traits.aspect != (vk::ImageAspectFlagBits::eDepth | vk::ImageAspectFlagBits::eStencil);
}
void TextureRuntime::FreeDescriptorSetsWithImage(vk::ImageView image_view) {
texture_provider.FreeWithImage(image_view);
blit_helper.compute_provider.FreeWithImage(image_view);
blit_helper.compute_buffer_provider.FreeWithImage(image_view);
blit_helper.two_textures_provider.FreeWithImage(image_view);
}
Surface::Surface(TextureRuntime& runtime_, const VideoCore::SurfaceParams& params)
: SurfaceBase{params}, runtime{&runtime_}, instance{&runtime_.GetInstance()},
scheduler{&runtime_.GetScheduler()}, traits{instance->GetTraits(pixel_format)} {
@@ -715,8 +710,7 @@ Surface::Surface(TextureRuntime& runtime_, const VideoCore::SurfaceParams& param
ASSERT_MSG(format != vk::Format::eUndefined && levels >= 1,
"Image allocation parameters are invalid");
u32 num_images = 0;
std::array<vk::Image, 3> raw_images;
boost::container::static_vector<vk::Image, 3> raw_images;
vk::ImageCreateFlags flags{};
if (texture_type == VideoCore::TextureType::CubeMap) {
@@ -729,18 +723,18 @@ Surface::Surface(TextureRuntime& runtime_, const VideoCore::SurfaceParams& param
const bool need_format_list = is_mutable && instance->IsImageFormatListSupported();
handles[0] = MakeHandle(instance, width, height, levels, texture_type, format, traits.usage,
flags, traits.aspect, need_format_list, DebugName(false));
raw_images[num_images++] = handles[0].image;
raw_images.emplace_back(handles[0].image);
if (res_scale != 1) {
handles[1] =
MakeHandle(instance, GetScaledWidth(), GetScaledHeight(), levels, texture_type, format,
traits.usage, flags, traits.aspect, need_format_list, DebugName(true));
raw_images[num_images++] = handles[1].image;
raw_images.emplace_back(handles[1].image);
}
runtime->renderpass_cache.EndRendering();
scheduler->Record([raw_images, num_images, aspect = traits.aspect](vk::CommandBuffer cmdbuf) {
const auto barriers = MakeInitBarriers(aspect, raw_images, num_images);
runtime->render_manager.EndRendering();
scheduler->Record([raw_images, aspect = traits.aspect](vk::CommandBuffer cmdbuf) {
const auto barriers = MakeInitBarriers(aspect, raw_images);
cmdbuf.pipelineBarrier(vk::PipelineStageFlagBits::eTopOfPipe,
vk::PipelineStageFlagBits::eTopOfPipe,
vk::DependencyFlagBits::eByRegion, {}, {}, barriers);
@@ -758,8 +752,7 @@ Surface::Surface(TextureRuntime& runtime_, const VideoCore::SurfaceBase& surface
const bool has_normal = mat && mat->Map(MapType::Normal);
const vk::Format format = traits.native;
u32 num_images = 0;
std::array<vk::Image, 2> raw_images;
boost::container::static_vector<vk::Image, 2> raw_images;
vk::ImageCreateFlags flags{};
if (texture_type == VideoCore::TextureType::CubeMap) {
@@ -769,23 +762,23 @@ Surface::Surface(TextureRuntime& runtime_, const VideoCore::SurfaceBase& surface
const std::string debug_name = DebugName(false, true);
handles[0] = MakeHandle(instance, mat->width, mat->height, levels, texture_type, format,
traits.usage, flags, traits.aspect, false, debug_name);
raw_images[num_images++] = handles[0].image;
raw_images.emplace_back(handles[0].image);
if (res_scale != 1) {
handles[1] = MakeHandle(instance, mat->width, mat->height, levels, texture_type,
vk::Format::eR8G8B8A8Unorm, traits.usage, flags, traits.aspect,
false, debug_name);
raw_images[num_images++] = handles[1].image;
raw_images.emplace_back(handles[1].image);
}
if (has_normal) {
handles[2] = MakeHandle(instance, mat->width, mat->height, levels, texture_type, format,
traits.usage, flags, traits.aspect, false, debug_name);
raw_images[num_images++] = handles[2].image;
raw_images.emplace_back(handles[2].image);
}
runtime->renderpass_cache.EndRendering();
scheduler->Record([raw_images, num_images, aspect = traits.aspect](vk::CommandBuffer cmdbuf) {
const auto barriers = MakeInitBarriers(aspect, raw_images, num_images);
runtime->render_manager.EndRendering();
scheduler->Record([raw_images, aspect = traits.aspect](vk::CommandBuffer cmdbuf) {
const auto barriers = MakeInitBarriers(aspect, raw_images);
cmdbuf.pipelineBarrier(vk::PipelineStageFlagBits::eTopOfPipe,
vk::PipelineStageFlagBits::eTopOfPipe,
vk::DependencyFlagBits::eByRegion, {}, {}, barriers);
@@ -800,9 +793,6 @@ Surface::~Surface() {
return;
}
for (const auto& [alloc, image, image_view] : handles) {
if (image_view) {
runtime->FreeDescriptorSetsWithImage(*image_view);
}
if (image) {
vmaDestroyImage(instance->GetAllocator(), image, alloc);
}
@@ -814,7 +804,7 @@ Surface::~Surface() {
void Surface::Upload(const VideoCore::BufferTextureCopy& upload,
const VideoCore::StagingData& staging) {
runtime->renderpass_cache.EndRendering();
runtime->render_manager.EndRendering();
const RecordParams params = {
.aspect = Aspect(),
@@ -825,11 +815,10 @@ void Surface::Upload(const VideoCore::BufferTextureCopy& upload,
scheduler->Record([buffer = runtime->upload_buffer.Handle(), format = traits.native, params,
staging, upload](vk::CommandBuffer cmdbuf) {
u32 num_copies = 1;
std::array<vk::BufferImageCopy, 2> buffer_image_copies;
boost::container::static_vector<vk::BufferImageCopy, 2> buffer_image_copies;
const auto rect = upload.texture_rect;
buffer_image_copies[0] = vk::BufferImageCopy{
buffer_image_copies.emplace_back(vk::BufferImageCopy{
.bufferOffset = upload.buffer_offset,
.bufferRowLength = rect.GetWidth(),
.bufferImageHeight = rect.GetHeight(),
@@ -841,15 +830,16 @@ void Surface::Upload(const VideoCore::BufferTextureCopy& upload,
},
.imageOffset = {static_cast<s32>(rect.left), static_cast<s32>(rect.bottom), 0},
.imageExtent = {rect.GetWidth(), rect.GetHeight(), 1},
};
});
if (params.aspect & vk::ImageAspectFlagBits::eStencil) {
buffer_image_copies[0].imageSubresource.aspectMask = vk::ImageAspectFlagBits::eDepth;
vk::BufferImageCopy& stencil_copy = buffer_image_copies[1];
vk::BufferImageCopy& stencil_copy =
buffer_image_copies.emplace_back(buffer_image_copies[0]);
stencil_copy = buffer_image_copies[0];
stencil_copy.bufferOffset += UnpackDepthStencil(staging, format);
stencil_copy.imageSubresource.aspectMask = vk::ImageAspectFlagBits::eStencil;
num_copies++;
}
const vk::ImageMemoryBarrier read_barrier = {
@@ -877,7 +867,7 @@ void Surface::Upload(const VideoCore::BufferTextureCopy& upload,
vk::DependencyFlagBits::eByRegion, {}, {}, read_barrier);
cmdbuf.copyBufferToImage(buffer, params.src_image, vk::ImageLayout::eTransferDstOptimal,
num_copies, buffer_image_copies.data());
buffer_image_copies);
cmdbuf.pipelineBarrier(vk::PipelineStageFlagBits::eTransfer, params.pipeline_flags,
vk::DependencyFlagBits::eByRegion, {}, {}, write_barrier);
@@ -904,7 +894,7 @@ void Surface::UploadCustom(const VideoCore::Material* material, u32 level) {
const Common::Rectangle rect{0U, height, width, 0U};
const auto upload = [&](u32 index, VideoCore::CustomTexture* texture) {
const u64 custom_size = texture->data.size();
const u32 custom_size = static_cast<u32>(texture->data.size());
const RecordParams params = {
.aspect = vk::ImageAspectFlagBits::eColor,
.pipeline_flags = PipelineStageFlags(),
@@ -982,7 +972,7 @@ void Surface::Download(const VideoCore::BufferTextureCopy& download,
runtime->download_buffer.Commit(staging.size);
});
runtime->renderpass_cache.EndRendering();
runtime->render_manager.EndRendering();
if (pixel_format == PixelFormat::D24S8) {
runtime->blit_helper.DepthToBuffer(*this, runtime->download_buffer.Handle(), download);
@@ -1082,15 +1072,15 @@ void Surface::ScaleUp(u32 new_scale) {
MakeHandle(instance, GetScaledWidth(), GetScaledHeight(), levels, texture_type,
traits.native, traits.usage, flags, traits.aspect, false, DebugName(true));
runtime->renderpass_cache.EndRendering();
runtime->render_manager.EndRendering();
scheduler->Record(
[raw_images = std::array{Image()}, aspect = traits.aspect](vk::CommandBuffer cmdbuf) {
const auto barriers = MakeInitBarriers(aspect, raw_images, raw_images.size());
const auto barriers = MakeInitBarriers(aspect, raw_images);
cmdbuf.pipelineBarrier(vk::PipelineStageFlagBits::eTopOfPipe,
vk::PipelineStageFlagBits::eTopOfPipe,
vk::DependencyFlagBits::eByRegion, {}, {}, barriers);
});
LOG_INFO(HW_GPU, "Surface scale up!");
for (u32 level = 0; level < levels; level++) {
const VideoCore::TextureBlit blit = {
.src_level = level,
@@ -1160,7 +1150,7 @@ vk::ImageView Surface::CopyImageView() noexcept {
copy_layout = vk::ImageLayout::eUndefined;
}
runtime->renderpass_cache.EndRendering();
runtime->render_manager.EndRendering();
const RecordParams params = {
.aspect = Aspect(),
@@ -1348,10 +1338,10 @@ vk::Framebuffer Surface::Framebuffer() noexcept {
const auto color_format = is_depth ? PixelFormat::Invalid : pixel_format;
const auto depth_format = is_depth ? pixel_format : PixelFormat::Invalid;
const auto render_pass =
runtime->renderpass_cache.GetRenderpass(color_format, depth_format, false);
runtime->render_manager.GetRenderpass(color_format, depth_format, false);
const auto attachments = std::array{ImageView()};
framebuffers[index] = MakeFramebuffer(instance->GetDevice(), render_pass, GetScaledWidth(),
GetScaledHeight(), attachments, 1);
GetScaledHeight(), attachments);
return framebuffers[index].get();
}
@@ -1462,7 +1452,7 @@ Framebuffer::Framebuffer(TextureRuntime& runtime, const VideoCore::FramebufferPa
Surface* color, Surface* depth)
: VideoCore::FramebufferParams{params}, res_scale{color ? color->res_scale
: (depth ? depth->res_scale : 1u)} {
auto& renderpass_cache = runtime.GetRenderpassCache();
auto& render_manager = runtime.GetRenderpassCache();
if (shadow_rendering && !color) {
return;
}
@@ -1481,29 +1471,27 @@ Framebuffer::Framebuffer(TextureRuntime& runtime, const VideoCore::FramebufferPa
image_views[index] = shadow_rendering ? surface->StorageView() : surface->FramebufferView();
};
u32 num_attachments = 0;
std::array<vk::ImageView, 2> attachments;
boost::container::static_vector<vk::ImageView, 2> attachments;
if (color) {
prepare(0, color);
attachments[num_attachments++] = image_views[0];
attachments.emplace_back(image_views[0]);
}
if (depth) {
prepare(1, depth);
attachments[num_attachments++] = image_views[1];
attachments.emplace_back(image_views[1]);
}
const vk::Device device = runtime.GetInstance().GetDevice();
if (shadow_rendering) {
render_pass =
renderpass_cache.GetRenderpass(PixelFormat::Invalid, PixelFormat::Invalid, false);
render_manager.GetRenderpass(PixelFormat::Invalid, PixelFormat::Invalid, false);
framebuffer = MakeFramebuffer(device, render_pass, color->GetScaledWidth(),
color->GetScaledHeight(), {}, 0);
color->GetScaledHeight(), {});
} else {
render_pass = renderpass_cache.GetRenderpass(formats[0], formats[1], false);
framebuffer =
MakeFramebuffer(device, render_pass, width, height, attachments, num_attachments);
render_pass = render_manager.GetRenderpass(formats[0], formats[1], false);
framebuffer = MakeFramebuffer(device, render_pass, width, height, attachments);
}
}
@@ -1518,7 +1506,7 @@ Sampler::Sampler(TextureRuntime& runtime, const VideoCore::SamplerParams& params
instance.IsCustomBorderColorSupported() && (params.wrap_s == TextureConfig::ClampToBorder ||
params.wrap_t == TextureConfig::ClampToBorder);
const Common::Vec4f color = PicaToVK::ColorRGBA8(params.border_color);
const auto color = PicaToVK::ColorRGBA8(params.border_color);
const vk::SamplerCustomBorderColorCreateInfoEXT border_color_info = {
.customBorderColor = MakeClearColorValue(color),
.format = vk::Format::eUndefined,

Some files were not shown because too many files have changed in this diff.