Compare commits

..

72 Commits

Author SHA1 Message Date
Mykola Grymalyuk
7d3720fc11 CI: Try GitHub runner 2024-09-02 19:06:40 -06:00
Mykola Grymalyuk
8332b714b5 graphics_audio.py: Add AMD Navi patch 2024-09-01 09:54:20 -06:00
neon ball
4add945fa0 Fix typo 2024-08-31 12:18:12 +03:00
neon ball
807d394bdb Fix link 2024-08-31 12:13:17 +03:00
neon ball
0ba78bae68 Fix formatting 2024-08-31 12:08:54 +03:00
neon ball
d73b3dcc69 Add Error Code 71 solution 2024-08-31 12:07:41 +03:00
Mykola Grymalyuk
c8aa13664d Merge pull request #1147 from crystall1nedev/s3x-patch
Increase model range of S1X/S3X kext inclusion
2024-08-22 17:04:21 -06:00
Eva Luna
f32a813a0c Add note in CHANGELOG.md 2024-08-22 19:02:20 -04:00
Eva Luna
2696879109 Add note for S1X/S3X changes and clarify if statement 2024-08-22 18:58:31 -04:00
Eva Isabella Luna
df28ea288a Increase model range of S1X/S3X kext inclusion
While stock systems with S1X/S3X drives only include Broadwell to Kaby Lake Macs, Haswell Macs and MacPro6,1 are able to use these drives as well, causing issues when building OpenCore for those models from a different machine.
2024-08-22 18:18:57 -04:00
Mykola Grymalyuk
fc5b250d41 sys_patch.py: Fix AuxKC check 2024-08-20 15:52:44 -06:00
neon ball
b349459da6 Increase visibility of app requirements and add a note about firmware 2024-08-21 00:44:08 +03:00
Mykola Grymalyuk
be7493f74a macOS Installer: Add handling for reading Sequoia installer versions 2024-08-20 15:26:14 -06:00
Mykola Grymalyuk
fbe216164a support.py: Ignore non-kext files 2024-08-20 15:25:36 -06:00
Mykola Grymalyuk
258b0309ab Merge pull request #1146 from dortania/kernel-management
Modularize System Volume Patching System
2024-08-14 09:18:38 -06:00
Mykola Grymalyuk
53dd5d3477 Further modularize sys_patch 2024-08-13 13:07:58 -06:00
Mykola Grymalyuk
c4cda81df6 Modularize sys_patch_mount.py 2024-08-12 16:38:05 -06:00
Mykola Grymalyuk
35b365c8ca Rework Kernel Cache management 2024-08-12 15:46:52 -06:00
Mykola Grymalyuk
1653fec580 sys_patch_helpers.py: Use full pathing 2024-08-12 08:37:55 -06:00
Mykola Grymalyuk
e453bd1b51 Sync PatcherSupportPkg 2024-08-11 19:57:39 -06:00
Jazzzny
1a576c72a2 Provide additional resilience in USB detection code (#1144)
* Add fallback, don't bail out

* Part 2

* Part 3

* Fix import

* Move encoding
2024-08-09 18:13:53 -04:00
neon ball
9a55317f86 Fix typo 2024-08-07 21:01:14 +03:00
neon ball
23d7f9f07c Fix some links 2024-08-07 21:00:35 +03:00
Mykola Grymalyuk
5fd7ad0b4b Sync CHANGELOG 2024-08-01 12:44:15 -06:00
Mykola Grymalyuk
b065da6dbf Merge pull request #1143 from dortania/copy-on-write
Implement improved Copy on Write detection
2024-08-01 12:42:04 -06:00
Mykola Grymalyuk
90092a296d Implement getattrlist for improved CoW detection 2024-08-01 11:16:00 -06:00
Mykola Grymalyuk
57356bcceb products.py: Streamline beta removal
Reduce additional loops to clear beta builds
2024-07-31 20:11:05 -06:00
Mykola Grymalyuk
d726851d9c products.py: Add extra sanity check 2024-07-31 10:58:40 -06:00
Mykola Grymalyuk
7897cd14b6 products.py: Work around index being offset on deletion
Resolves non-latest builds appearing in latest dictionary
2024-07-31 10:54:15 -06:00
Mykola Grymalyuk
628fe4f8fc products.py: Verify item exists before removal 2024-07-31 09:05:41 -06:00
Mykola Grymalyuk
a074baa2e9 sys_patch: Remove unused bplist code 2024-07-25 12:19:28 -06:00
Jazzzny
e81c138d2e Update README.md 2024-07-25 13:31:09 -04:00
Jazzzny
aa4fd137d1 Update README.md 2024-07-25 13:16:05 -04:00
neon ball
580fb83b4d Remove repetition and small change 2024-07-22 00:34:33 +03:00
neon ball
de3875279a Swap OS version names from heading to bold 2024-07-22 00:24:24 +03:00
neon ball
cdfefe1612 Add a thing 2024-07-22 00:14:56 +03:00
neon ball
6f7f309a4d Change a bit 2024-07-22 00:13:39 +03:00
neon ball
86a7e306f6 Improve SIP documentation
Previous one was a bit of a jumbled mess, added some cohesiveness and version based information
2024-07-22 00:08:26 +03:00
Mykola Grymalyuk
8d88fbbfa4 Remove unused imports 2024-07-21 13:53:52 -06:00
Mykola Grymalyuk
ae423888d7 Merge pull request #1142 from dortania/sucatalog-rewrite
sucatalog: Implement more robust Software Update Catalog library
2024-07-21 12:16:59 -06:00
Mykola Grymalyuk
4583a743be sucatalog: Publish initial version 2024-07-21 11:54:54 -06:00
neon ball
537853418d Fix typo 2024-07-16 11:41:12 +03:00
neon ball
6603df4cce Fix link v2 2024-07-16 11:39:54 +03:00
neon ball
21e144ee5f Fix link 2024-07-16 11:39:08 +03:00
Dhinak G
18157fe5aa Fix mislabeled MBA Identifiers (#1140)
Co-authored-by: ROSeaboyer <ryan.seaboyer@icloud.com>
2024-07-05 13:42:56 -04:00
Dhinak G
66f0009c65 disk_images.py: Do not include EFI 2024-06-28 12:10:57 -04:00
Dhinak G
49da508bde sys_patch.py: Better wording for staged update sanity check 2024-06-28 12:10:15 -04:00
Mykola Grymalyuk
f46f4cf857 Merge pull request #1138 from dortania/bsd
Mention contributors in license
2024-06-27 13:34:32 -06:00
Jazzzny
4f104de405 Reword 2024-06-27 14:28:27 -04:00
Jazzzny
4f2f9a8763 Merge branch 'main' into bsd 2024-06-27 12:55:16 -04:00
Jazzzny
ceeef7c1a5 Update LICENSE.txt to include individual contributors 2024-06-27 12:51:19 -04:00
neon ball
ded1e8c2c7 Adjust size 2024-06-23 20:30:55 +03:00
neon ball
0f83e77f1a Fix path 2024-06-23 20:28:27 +03:00
neon ball
1fc1595ffb Fix image size 2024-06-23 20:23:49 +03:00
neon ball
1c147819f7 Add troubleshooting section about "Bless failed"
Possible solution to fix "You have no permission to save..." error
2024-06-23 20:18:44 +03:00
neon ball
4aaf658c8f Move sidebar location 2024-06-16 15:01:11 +03:00
neon ball
2fb193692b Update PROCESS.md 2024-06-16 14:55:46 +03:00
neon ball
7f6a2e393c Update PROCESS.md 2024-06-16 14:48:29 +03:00
neon ball
0a48986ddb Fix note again 2024-06-16 14:46:11 +03:00
neon ball
edd9814f12 Fix note 2024-06-16 14:43:47 +03:00
neon ball
f32f94e588 Fix typos 2024-06-16 14:42:21 +03:00
neon ball
5fb4bbf7f4 Add note highlight 2024-06-16 14:40:56 +03:00
neon ball
7d8d9324e0 Rename PROCESS to PROCESS.md 2024-06-16 14:34:37 +03:00
neon ball
f717bdceae Update config.js 2024-06-16 14:26:08 +03:00
neon ball
d015f8d1e4 Create PROCESS 2024-06-16 14:25:47 +03:00
Mykola Grymalyuk
9a2fca8d18 os_data.py: Add macOS Sequoia constant 2024-06-10 11:52:48 -06:00
Mykola Grymalyuk
475b9e793f sys_patch: Fix patches typing 2024-06-08 20:24:04 -06:00
Mykola Grymalyuk
73ce7e5bda package_scripts.py: Adjust formatting 2024-06-02 12:29:56 -06:00
Mykola Grymalyuk
3bffad8001 GUI: Add side spacing for wx.TextCtrl elements 2024-06-02 12:19:44 -06:00
Mykola Grymalyuk
aa40e9328a CI: Programmatically create PKG scripts
Additionally move all PKG assets to ci_tooling/pkg_assets
2024-06-02 12:16:25 -06:00
Mykola Grymalyuk
ec103c1d2e Launch Services: Adjust AssociatedBundleIdentifiers 2024-06-02 12:04:26 -06:00
Mykola Grymalyuk
dd88879dc0 Increment version 2024-05-31 11:43:54 -06:00
82 changed files with 3396 additions and 2236 deletions

View File

@@ -9,7 +9,7 @@ on:
 jobs:
   build:
     name: Build wxPython
-    runs-on: x86_64_monterey
+    runs-on: macos-latest
     if: github.repository_owner == 'dortania'
     env:
@@ -24,9 +24,13 @@ jobs:
       # App Signing
       ORG_MAC_DEVELOPER_ID_APPLICATION_IDENTITY: ${{ secrets.ORG_MAC_DEVELOPER_ID_APPLICATION_IDENTITY }}
+      ORG_MAC_DEVELOPER_ID_APPLICATION_CERT_P12_BASE64: ${{ secrets.ORG_MAC_DEVELOPER_ID_APPLICATION_CERT_P12_BASE64 }}
+      ORG_MAC_DEVELOPER_ID_APPLICATION_CERT_P12_PASSWORD: ${{ secrets.ORG_MAC_DEVELOPER_ID_APPLICATION_CERT_P12_PASSWORD }}
       # PKG Signing
       ORG_MAC_DEVELOPER_ID_INSTALLER_IDENTITY: ${{ secrets.ORG_MAC_DEVELOPER_ID_INSTALLER_IDENTITY }}
+      ORG_MAC_DEVELOPER_ID_INSTALLER_CERT_P12_BASE64: ${{ secrets.ORG_MAC_DEVELOPER_ID_INSTALLER_CERT_P12_BASE64 }}
+      ORG_MAC_DEVELOPER_ID_INSTALLER_CERT_P12_PASSWORD: ${{ secrets.ORG_MAC_DEVELOPER_ID_INSTALLER_CERT_P12_PASSWORD }}
       # Notarization
       ORG_MAC_NOTARIZATION_TEAM_ID: ${{ secrets.ORG_MAC_NOTARIZATION_TEAM_ID }}
@@ -36,36 +40,41 @@ jobs:
     steps:
       - uses: actions/checkout@v4
-      # - name: Import Application Signing Certificate
-      #   uses: dhinakg/import-codesign-certs@master
-      #   with:
-      #     p12-file-base64: ${{ secrets.ORG_MAC_DEVELOPER_ID_APPLICATION_CERT_P12_BASE64 }}
-      #     p12-password: ${{ secrets.ORG_MAC_DEVELOPER_ID_APPLICATION_CERT_P12_PASSWORD }}
-      # - name: Import Installer Signing Certificate
-      #   uses: dhinakg/import-codesign-certs@master
-      #   with:
-      #     p12-file-base64: ${{ secrets.ORG_MAC_DEVELOPER_ID_INSTALLER_CERT_P12_BASE64 }}
-      #     p12-password: ${{ secrets.ORG_MAC_DEVELOPER_ID_INSTALLER_CERT_P12_PASSWORD }}
-      # - name: Install Dependencies
-      #   run: /Library/Frameworks/Python.framework/Versions/3.11/bin/python3 -m pip install -r requirements.txt
-      # - name: Force Universal2 charset for Python
-      #   run: |
-      #     /Library/Frameworks/Python.framework/Versions/3.11/bin/python3 -m pip uninstall -y charset_normalizer
-      #     /Library/Frameworks/Python.framework/Versions/3.11/bin/python3 -m pip download --platform macosx_10_9_universal2 --only-binary=:all: charset-normalizer
-      #     /Library/Frameworks/Python.framework/Versions/3.11/bin/python3 -m pip install charset_normalizer-*-macosx_10_9_universal2.whl
+      - name: Set up Python 3.11
+        uses: actions/setup-python@v4
+        with:
+          python-version: 3.11
+      - name: Import Application Signing Certificate
+        uses: dhinakg/import-codesign-certs@master
+        with:
+          p12-file-base64: ${{ secrets.ORG_MAC_DEVELOPER_ID_APPLICATION_CERT_P12_BASE64 }}
+          p12-password: ${{ secrets.ORG_MAC_DEVELOPER_ID_APPLICATION_CERT_P12_PASSWORD }}
+      - name: Import Installer Signing Certificate
+        uses: dhinakg/import-codesign-certs@master
+        with:
+          p12-file-base64: ${{ secrets.ORG_MAC_DEVELOPER_ID_INSTALLER_CERT_P12_BASE64 }}
+          p12-password: ${{ secrets.ORG_MAC_DEVELOPER_ID_INSTALLER_CERT_P12_PASSWORD }}
+      - name: Install Dependencies
+        run: python3 -m pip install -r requirements.txt
+      - name: Force Universal2 charset for Python
+        run: |
+          python3 -m pip uninstall -y charset_normalizer
+          python3 -m pip download --platform macosx_10_9_universal2 --only-binary=:all: charset-normalizer
+          python3 -m pip install charset_normalizer-*-macosx_10_9_universal2.whl
       - name: Prepare Assets (--prepare-assets)
         run: >
-          /Library/Frameworks/Python.framework/Versions/3.11/bin/python3 Build-Project.command
+          python3 Build-Project.command
           --run-as-individual-steps --reset-dmg-cache
           --prepare-assets
       - name: Prepare Application (--prepare-application)
         run: >
-          /Library/Frameworks/Python.framework/Versions/3.11/bin/python3 Build-Project.command
+          python3 Build-Project.command
           --application-signing-identity "${{ env.ORG_MAC_DEVELOPER_ID_APPLICATION_IDENTITY }}"
           --notarization-apple-id "${{ env.ORG_MAC_NOTARIZATION_APPLE_ID }}" --notarization-password "${{ env.ORG_MAC_NOTARIZATION_PASSWORD }}" --notarization-team-id "${{ env.ORG_MAC_NOTARIZATION_TEAM_ID }}"
           --git-branch "${{ env.branch }}" --git-commit-url "${{ env.commiturl }}" --git-commit-date "${{ env.commitdate }}"
@@ -76,7 +85,7 @@ jobs:
       - name: Prepare Package (--prepare-package)
         run: >
-          /Library/Frameworks/Python.framework/Versions/3.11/bin/python3 Build-Project.command
+          python3 Build-Project.command
           --installer-signing-identity "${{ env.ORG_MAC_DEVELOPER_ID_INSTALLER_IDENTITY }}"
           --notarization-apple-id "${{ env.ORG_MAC_NOTARIZATION_APPLE_ID }}" --notarization-password "${{ env.ORG_MAC_NOTARIZATION_PASSWORD }}" --notarization-team-id "${{ env.ORG_MAC_NOTARIZATION_TEAM_ID }}"
           --run-as-individual-steps
@@ -84,7 +93,7 @@ jobs:
       - name: Prepare Update Shim (--prepare-shim)
         run: >
-          /Library/Frameworks/Python.framework/Versions/3.11/bin/python3 Build-Project.command
+          python3 Build-Project.command
           --application-signing-identity "${{ env.ORG_MAC_DEVELOPER_ID_APPLICATION_IDENTITY }}"
           --notarization-apple-id "${{ env.ORG_MAC_NOTARIZATION_APPLE_ID }}" --notarization-password "${{ env.ORG_MAC_NOTARIZATION_PASSWORD }}" --notarization-team-id "${{ env.ORG_MAC_NOTARIZATION_TEAM_ID }}"
           --run-as-individual-steps

View File

@@ -1,5 +1,25 @@
 # OpenCore Legacy Patcher changelog
+## 1.6.0
+- Set `AssociatedBundleIdentifiers` property in launch services as an array
+- Move to auto-generated pre/postinstall scripts for PKGs
+  - Streamlines PKG creation process, ensuring Install and AutoPKG scripts are always in sync
+- Add support for `gktool` in PKG postinstall scripts
+  - Removes Gatekeeper "verifying" prompt on first launch after PKG installation
+  - Note `gktool` is only available on macOS Sonoma and newer
+- Resolve unpatching crash edge case when host doesn't require patches
+- Implement new Software Update Catalog Parser for macOS Installers
+- Implement new Copy on Write detection mechanism for all file copying operations
+  - Implemented using `getattrlist` and `VOL_CAP_INT_CLONE` flag
+  - Helps improve performance on APFS volumes
+- Increase model range for S1X/S3X patching to include Haswell Macs and `MacPro6,1`
+  - Helps avoid an issue where older machines with newer, unsupported SSDs would fail to boot
+  - Only affects building EFI from another machine
+- Resolve AMD Navi MXM GPU detection for modded iMac9,x-12,x
+  - Thanks @Ausdauersportler for the patch!
+- Increment Binaries:
+  - PatcherSupportPkg 1.6.3 - release
 ## 1.5.0
 - Restructure project directories
   - Python:
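The `getattrlist`/`VOL_CAP_INT_CLONE` bullet above comes down to reading the volume capability bitmaps and testing a single bit. A minimal Python sketch of that bit test follows; the constant values are assumed from macOS's `sys/attr.h`, and the surrounding `getattrlist()` system call needs a real macOS volume, so only the interpretation step is shown:

```python
# Constants assumed from macOS <sys/attr.h>:
VOL_CAPABILITIES_INTERFACES = 1   # index of the "interfaces" capability word
VOL_CAP_INT_CLONE = 0x00010000    # volume supports clonefile()


def volume_supports_clone(capabilities, valid) -> bool:
    """
    Interpret the two bitmap arrays of a vol_capabilities_attr_t as
    returned by getattrlist(): the CLONE bit only counts if the volume
    both marks it as valid and has it set.
    """
    word = VOL_CAPABILITIES_INTERFACES
    return bool(valid[word] & VOL_CAP_INT_CLONE and
                capabilities[word] & VOL_CAP_INT_CLONE)
```

On APFS both bits are set, so copies can be performed as near-instant clones; on HFS+ the valid bit alone reveals the capability is absent.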

View File

@@ -1,5 +1,4 @@
Copyright (c) 2020-2024, Dhinak G Copyright (c) 2020-2024 Dhinak G, Mykola Grymalyuk, and individual contributors.
Copyright (c) 2020-2024, Mykola Grymalyuk
All rights reserved. All rights reserved.

View File

@@ -100,7 +100,15 @@ To run the project from source, see here: [Build and run from source](./SOURCE.m
   * Pre-Ivy Bridge Aquantia Ethernet Patch
   * Non-Metal Photo Booth Patch for Monterey+
   * GUI and Backend Development
+  * Updater UI
+  * macOS Downloader UI
+  * Downloader UI
+  * USB Top Case probing
+  * Developer root patching
   * Vaulting implementation
+  * UEFI bootx64.efi research
+  * universal2 build research
+  * Various documentation contributions
 * Amazing users who've graciously donate hardware:
   * [JohnD](https://forums.macrumors.com/members/johnd.53633/) - 2013 Mac Pro
   * [SpiGAndromeda](https://github.com/SpiGAndromeda) - AMD Vega 64

View File

@@ -1,103 +0,0 @@
#!/bin/zsh --no-rcs
# ------------------------------------------------------
# AutoPkg Assets Postinstall Script
# ------------------------------------------------------
# Create alias for app, start patching and reboot.
# ------------------------------------------------------
# MARK: PackageKit Parameters
# ---------------------------
pathToScript=$0 # ex. /tmp/PKInstallSandbox.*/Scripts/*/preinstall
pathToPackage=$1 # ex. ~/Downloads/Installer.pkg
pathToTargetLocation=$2 # ex. '/', '/Applications', etc (depends on pkgbuild's '--install-location' argument)
pathToTargetVolume=$3 # ex. '/', '/Volumes/MyVolume', etc
pathToStartupDisk=$4 # ex. '/'
# MARK: Variables
# ---------------------------
helperPath="Library/PrivilegedHelperTools/com.dortania.opencore-legacy-patcher.privileged-helper"
mainAppPath="Library/Application Support/Dortania/OpenCore-Patcher.app"
shimAppPath="Applications/OpenCore-Patcher.app"
executablePath="$mainAppPath/Contents/MacOS/OpenCore-Patcher"
# MARK: Functions
# ---------------------------
function _setSUIDBit() {
local binaryPath=$1
echo "Setting SUID bit on: $binaryPath"
# Check if path is a directory
if [[ -d $binaryPath ]]; then
/bin/chmod -R +s $binaryPath
else
/bin/chmod +s $binaryPath
fi
}
function _createAlias() {
local mainPath=$1
local aliasPath=$2
# Check if alias path exists
if [[ -e $aliasPath ]]; then
# Check if alias path is a symbolic link
if [[ -L $aliasPath ]]; then
echo "Removing old symbolic link: $aliasPath"
/bin/rm -f $aliasPath
else
echo "Removing old file: $aliasPath"
/bin/rm -rf $aliasPath
fi
fi
# Create symbolic link
echo "Creating symbolic link: $aliasPath"
/bin/ln -s $mainPath $aliasPath
}
function _startPatching() {
local executable=$1
local logPath=$(_logFile)
# Start patching
"$executable" "--patch_sys_vol" &> $logPath
}
function _logFile() {
echo "/Users/Shared/.OCLP-AutoPatcher-Log-$(/bin/date +"%Y_%m_%d_%I_%M_%p").txt"
}
function _fixSettingsFilePermission() {
local settingsPath="$pathToTargetVolume/Users/Shared/.com.dortania.opencore-legacy-patcher.plist"
if [[ -e $settingsPath ]]; then
echo "Fixing settings file permissions: $settingsPath"
/bin/chmod 666 $settingsPath
fi
}
function _reboot() {
/sbin/reboot
}
function _main() {
_setSUIDBit "$pathToTargetVolume/$helperPath"
_createAlias "$pathToTargetVolume/$mainAppPath" "$pathToTargetVolume/$shimAppPath"
_startPatching "$pathToTargetVolume/$executablePath"
_fixSettingsFilePermission
_reboot
}
# MARK: Main
# ---------------------------
echo "Starting postinstall script..."
_main

View File

@@ -1,80 +0,0 @@
#!/bin/zsh --no-rcs
# ------------------------------------------------------
# AutoPkg Assets Preinstall Script
# ------------------------------------------------------
# Remove old files, and prepare directories.
# ------------------------------------------------------
# MARK: PackageKit Parameters
# ---------------------------
pathToScript=$0 # ex. /tmp/PKInstallSandbox.*/Scripts/*/preinstall
pathToPackage=$1 # ex. ~/Downloads/Installer.pkg
pathToTargetLocation=$2 # ex. '/', '/Applications', etc (depends on pkgbuild's '--install-location' argument)
pathToTargetVolume=$3 # ex. '/', '/Volumes/MyVolume', etc
pathToStartupDisk=$4 # ex. '/'
# MARK: Variables
# ---------------------------
filesToRemove=(
"Applications/OpenCore-Patcher.app"
"Library/Application Support/Dortania/Update.plist"
"Library/Application Support/Dortania/OpenCore-Patcher.app"
"Library/LaunchAgents/com.dortania.opencore-legacy-patcher.auto-patch.plist"
"Library/PrivilegedHelperTools/com.dortania.opencore-legacy-patcher.privileged-helper"
)
# MARK: Functions
# ---------------------------
function _removeFile() {
local file=$1
if [[ ! -e $file ]]; then
# Check if file is a symbolic link
if [[ -L $file ]]; then
echo "Removing symbolic link: $file"
/bin/rm -f $file
fi
return
fi
echo "Removing file: $file"
# Check if file is a directory
if [[ -d $file ]]; then
/bin/rm -rf $file
else
/bin/rm -f $file
fi
}
function _createParentDirectory() {
local file=$1
local parentDirectory="$(/usr/bin/dirname $file)"
# Check if parent directory exists
if [[ ! -d $parentDirectory ]]; then
echo "Creating parent directory: $parentDirectory"
/bin/mkdir -p $parentDirectory
fi
}
function _main() {
for file in $filesToRemove; do
_removeFile $pathToTargetVolume/$file
_createParentDirectory $pathToTargetVolume/$file
done
}
# MARK: Main
# ---------------------------
echo "Starting preinstall script..."
_main
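For readers more at home in Python than zsh, the preinstall logic above (remove stale files from a previous install, then recreate their parent directories) can be sketched as follows. `clean_previous_install` is an illustrative name, not part of the project:

```python
import shutil
from pathlib import Path

# Same leftovers the zsh preinstall script targets
FILES_TO_REMOVE = [
    "Applications/OpenCore-Patcher.app",
    "Library/Application Support/Dortania/Update.plist",
    "Library/Application Support/Dortania/OpenCore-Patcher.app",
    "Library/LaunchAgents/com.dortania.opencore-legacy-patcher.auto-patch.plist",
    "Library/PrivilegedHelperTools/com.dortania.opencore-legacy-patcher.privileged-helper",
]


def clean_previous_install(target_volume: str) -> None:
    """Remove stale files, then ensure their parent directories exist."""
    for rel in FILES_TO_REMOVE:
        path = Path(target_volume) / rel
        if path.is_symlink() or path.is_file():
            path.unlink()                 # plain file or dangling symlink
        elif path.is_dir():
            shutil.rmtree(path)           # .app bundles are directories
        path.parent.mkdir(parents=True, exist_ok=True)
```

As in the zsh version, directories (.app bundles) need a recursive delete while plain files and symlinks are simply unlinked.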

View File

@@ -5,7 +5,7 @@ import subprocess
 from pathlib import Path

-from opencore_legacy_patcher import constants
+from opencore_legacy_patcher.volume import generate_copy_arguments
 from opencore_legacy_patcher.support import subprocess_wrapper
@@ -158,7 +158,7 @@ class GenerateApplication:
         print("Embedding resources")
         for file in Path("payloads/Icon/AppIcons").glob("*.icns"):
             subprocess_wrapper.run_and_verify(
-                ["/bin/cp", str(file), self._application_output / "Contents" / "Resources/"],
+                generate_copy_arguments(str(file), self._application_output / "Contents" / "Resources/"),
                 stdout=subprocess.PIPE, stderr=subprocess.PIPE
             )
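The `generate_copy_arguments` helper swapped in above lives in `opencore_legacy_patcher.volume`, which is not shown in this compare, so here is only a hedged sketch of the idea: prefer `cp -c` (a `clonefile()`-backed copy on APFS) when the volume supports cloning, otherwise fall back to a plain copy. The `supports_clone` parameter stands in for the `getattrlist()` check the real helper presumably performs itself:

```python
def generate_copy_arguments(source, destination, supports_clone: bool) -> list:
    """
    Build a /bin/cp invocation, adding -c (clonefile-backed copy) when
    the destination volume advertises clone support. Sketch only; the
    real helper's signature may differ.
    """
    args = ["/bin/cp"]
    if supports_clone:
        args.append("-c")  # near-instant Copy on Write clone on APFS
    args += [str(source), str(destination)]
    return args
```

The caller then hands the resulting list to `subprocess`, exactly as `run_and_verify` does in the diff above.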

View File

@@ -78,6 +78,7 @@ class GenerateDiskImages:
             '-format', 'UDZO', '-ov',
             '-volname', 'OpenCore Patcher Resources (Base)',
             '-fs', 'HFS+',
+            '-layout', 'NONE',
             '-srcfolder', './payloads',
             '-passphrase', 'password', '-encryption'
         ], stdout=subprocess.PIPE, stderr=subprocess.PIPE)

View File

@@ -2,9 +2,13 @@
 package.py: Generate packages (Installer, Uninstaller, AutoPkg-Assets)
 """

+import tempfile
+
 import macos_pkg_builder

 from opencore_legacy_patcher import constants

+from .package_scripts import GenerateScripts
+

 class GeneratePackage:
     """
@@ -63,48 +67,82 @@ class GeneratePackage:
         return _welcome

+    def _generate_autopkg_welcome(self) -> str:
+        """
+        Generate Welcome message for AutoPkg-Assets PKG
+        """
+        _welcome = ""
+
+        _welcome += "# DO NOT RUN AUTOPKG-ASSETS MANUALLY!\n\n"
+        _welcome += "## THIS CAN BREAK YOUR SYSTEM'S INSTALL!\n\n"
+        _welcome += "This package should only ever be invoked by the Patcher itself, never downloaded or run by the user. Download the OpenCore-Patcher.pkg on the Github Repository.\n\n"
+        _welcome += f"[OpenCore Legacy Patcher GitHub Release]({constants.Constants().repo_link})"
+
+        return _welcome
+
     def generate(self) -> None:
         """
         Generate OpenCore-Patcher.pkg
         """
         print("Generating OpenCore-Patcher-Uninstaller.pkg")
+        _tmp_uninstall = tempfile.NamedTemporaryFile(delete=False)
+        with open(_tmp_uninstall.name, "w") as f:
+            f.write(GenerateScripts().uninstall())
+
         assert macos_pkg_builder.Packages(
             pkg_output="./dist/OpenCore-Patcher-Uninstaller.pkg",
             pkg_bundle_id="com.dortania.opencore-legacy-patcher-uninstaller",
             pkg_version=constants.Constants().patcher_version,
-            pkg_background="./ci_tooling/installation_pkg/PkgBackgroundUninstaller.png",
-            pkg_preinstall_script="./ci_tooling/installation_pkg/uninstall.sh",
+            pkg_background="./ci_tooling/pkg_assets/PkgBackground-Uninstaller.png",
+            pkg_preinstall_script=_tmp_uninstall.name,
             pkg_as_distribution=True,
             pkg_title="OpenCore Legacy Patcher Uninstaller",
             pkg_welcome=self._generate_uninstaller_welcome(),
         ).build() is True

         print("Generating OpenCore-Patcher.pkg")
+        _tmp_pkg_preinstall = tempfile.NamedTemporaryFile(delete=False)
+        _tmp_pkg_postinstall = tempfile.NamedTemporaryFile(delete=False)
+
+        with open(_tmp_pkg_preinstall.name, "w") as f:
+            f.write(GenerateScripts().preinstall_pkg())
+        with open(_tmp_pkg_postinstall.name, "w") as f:
+            f.write(GenerateScripts().postinstall_pkg())
+
         assert macos_pkg_builder.Packages(
             pkg_output="./dist/OpenCore-Patcher.pkg",
             pkg_bundle_id="com.dortania.opencore-legacy-patcher",
             pkg_version=constants.Constants().patcher_version,
             pkg_allow_relocation=False,
             pkg_as_distribution=True,
-            pkg_background="./ci_tooling/installation_pkg/PkgBackground.png",
-            pkg_preinstall_script="./ci_tooling/installation_pkg/preinstall.sh",
-            pkg_postinstall_script="./ci_tooling/installation_pkg/postinstall.sh",
+            pkg_background="./ci_tooling/pkg_assets/PkgBackground-Installer.png",
+            pkg_preinstall_script=_tmp_pkg_preinstall.name,
+            pkg_postinstall_script=_tmp_pkg_postinstall.name,
             pkg_file_structure=self._files,
             pkg_title="OpenCore Legacy Patcher",
             pkg_welcome=self._generate_installer_welcome(),
         ).build() is True

         print("Generating AutoPkg-Assets.pkg")
+        _tmp_auto_pkg_preinstall = tempfile.NamedTemporaryFile(delete=False)
+        _tmp_auto_pkg_postinstall = tempfile.NamedTemporaryFile(delete=False)
+
+        with open(_tmp_auto_pkg_preinstall.name, "w") as f:
+            f.write(GenerateScripts().preinstall_autopkg())
+        with open(_tmp_auto_pkg_postinstall.name, "w") as f:
+            f.write(GenerateScripts().postinstall_autopkg())
+
         assert macos_pkg_builder.Packages(
             pkg_output="./dist/AutoPkg-Assets.pkg",
             pkg_bundle_id="com.dortania.pkg.AutoPkg-Assets",
             pkg_version=constants.Constants().patcher_version,
             pkg_allow_relocation=False,
             pkg_as_distribution=True,
-            pkg_background="./ci_tooling/autopkg/PkgBackground.png",
-            pkg_preinstall_script="./ci_tooling/autopkg/preinstall.sh",
-            pkg_postinstall_script="./ci_tooling/autopkg/postinstall.sh",
+            pkg_background="./ci_tooling/pkg_assets/PkgBackground-AutoPkg.png",
+            pkg_preinstall_script=_tmp_auto_pkg_preinstall.name,
+            pkg_postinstall_script=_tmp_auto_pkg_postinstall.name,
             pkg_file_structure=self._autopkg_files,
             pkg_title="AutoPkg Assets",
-            pkg_welcome="# DO NOT RUN AUTOPKG-ASSETS MANUALLY!\n\n## THIS CAN BREAK YOUR SYSTEM'S INSTALL!\n\nThis package should only ever be invoked by the Patcher itself, never downloaded or run by the user. Download the OpenCore-Patcher.pkg on the Github Repository.\n\n[OpenCore Legacy Patcher GitHub Release](https://github.com/dortania/OpenCore-Legacy-Patcher/releases/)",
+            pkg_welcome=self._generate_autopkg_welcome(),
         ).build() is True
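The pattern introduced above, rendering a script to a `tempfile.NamedTemporaryFile(delete=False)` and handing its path to `macos_pkg_builder`, can be reduced to a small standalone sketch. `write_script_to_temp` is an illustrative name, not project code:

```python
import tempfile


def write_script_to_temp(script_body: str) -> str:
    """
    Persist generated script text to a temporary file and return its
    path. delete=False keeps the file on disk after the handle closes,
    so a later packaging step can still read it by path.
    """
    tmp = tempfile.NamedTemporaryFile(mode="w", suffix=".sh", delete=False)
    with tmp:
        tmp.write(script_body)
    return tmp.name
```

The trade-off is that the caller (or the end of the CI run) is responsible for deleting the file, which is acceptable inside a disposable build runner.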

View File

@@ -0,0 +1,556 @@
"""
package_scripts.py: Generate pre/postinstall scripts for PKG and AutoPkg
"""
class ZSHFunctions:
def __init__(self) -> None:
pass
def generate_standard_pkg_parameters(self) -> str:
"""
ZSH variables for standard PackageKit parameters
"""
_script = ""
_script += "# MARK: PackageKit Parameters\n"
_script += "# " + "-" * 27 + "\n\n"
_script += "pathToScript=$0 # ex. /tmp/PKInstallSandbox.*/Scripts/*/preinstall\n"
_script += "pathToPackage=$1 # ex. ~/Downloads/Installer.pkg\n"
_script += "pathToTargetLocation=$2 # ex. '/', '/Applications', etc (depends on pkgbuild's '--install-location' argument)\n"
_script += "pathToTargetVolume=$3 # ex. '/', '/Volumes/MyVolume', etc\n"
_script += "pathToStartupDisk=$4 # ex. '/'\n"
return _script
def generate_script_remove_file(self) -> str:
"""
ZSH function to remove files
"""
_script = ""
_script += "function _removeFile() {\n"
_script += " local file=$1\n\n"
_script += " if [[ ! -e $file ]]; then\n"
_script += " # Check if file is a symbolic link\n"
_script += " if [[ -L $file ]]; then\n"
_script += " echo \"Removing symbolic link: $file\"\n"
_script += " /bin/rm -f $file\n"
_script += " fi\n"
_script += " return\n"
_script += " fi\n\n"
_script += " echo \"Removing file: $file\"\n\n"
_script += " # Check if file is a directory\n"
_script += " if [[ -d $file ]]; then\n"
_script += " /bin/rm -rf $file\n"
_script += " else\n"
_script += " /bin/rm -f $file\n"
_script += " fi\n"
_script += "}\n"
return _script
def generate_script_create_parent_directory(self) -> str:
"""
ZSH function to create parent directory
"""
_script = ""
_script += "function _createParentDirectory() {\n"
_script += " local file=$1\n\n"
_script += " local parentDirectory=\"$(/usr/bin/dirname $file)\"\n\n"
_script += " # Check if parent directory exists\n"
_script += " if [[ ! -d $parentDirectory ]]; then\n"
_script += " echo \"Creating parent directory: $parentDirectory\"\n"
_script += " /bin/mkdir -p $parentDirectory\n"
_script += " fi\n"
_script += "}\n"
return _script
def generate_set_suid_bit(self) -> str:
"""
ZSH function to set SUID bit
"""
_script = ""
_script += "function _setSUIDBit() {\n"
_script += " local binaryPath=$1\n\n"
_script += " echo \"Setting SUID bit on: $binaryPath\"\n\n"
_script += " # Check if path is a directory\n"
_script += " if [[ -d $binaryPath ]]; then\n"
_script += " /bin/chmod -R +s $binaryPath\n"
_script += " else\n"
_script += " /bin/chmod +s $binaryPath\n"
_script += " fi\n"
_script += "}\n"
return _script
def generate_create_alias(self) -> str:
"""
ZSH function to create alias
"""
_script = ""
_script += "function _createAlias() {\n"
_script += " local mainPath=$1\n"
_script += " local aliasPath=$2\n\n"
_script += " # Check if alias path exists\n"
_script += " if [[ -e $aliasPath ]]; then\n"
_script += " # Check if alias path is a symbolic link\n"
_script += " if [[ -L $aliasPath ]]; then\n"
_script += " echo \"Removing old symbolic link: $aliasPath\"\n"
_script += " /bin/rm -f $aliasPath\n"
_script += " else\n"
_script += " echo \"Removing old file: $aliasPath\"\n"
_script += " /bin/rm -rf $aliasPath\n"
_script += " fi\n"
_script += " fi\n\n"
_script += " # Create symbolic link\n"
_script += " echo \"Creating symbolic link: $aliasPath\"\n"
_script += " /bin/ln -s $mainPath $aliasPath\n"
_script += "}\n"
return _script
def generate_start_patching(self) -> str:
"""
ZSH function to start patching
"""
_script = ""
_script += "function _startPatching() {\n"
_script += " local executable=$1\n"
_script += " local logPath=$(_logFile)\n\n"
_script += " # Start patching\n"
_script += " \"$executable\" \"--patch_sys_vol\" &> $logPath\n"
_script += "}\n"
return _script
def generate_log_file(self) -> str:
"""
ZSH function to generate log file
"""
_script = ""
_script += "function _logFile() {\n"
_script += " echo \"/Users/Shared/.OCLP-AutoPatcher-Log-$(/bin/date +\"%Y_%m_%d_%I_%M_%p\").txt\"\n"
_script += "}\n"
return _script
def generate_fix_settings_file_permission(self) -> str:
"""
ZSH function to fix settings file permission
"""
_script = ""
_script += "function _fixSettingsFilePermission() {\n"
_script += " local settingsPath=\"$pathToTargetVolume/Users/Shared/.com.dortania.opencore-legacy-patcher.plist\"\n\n"
_script += " if [[ -e $settingsPath ]]; then\n"
_script += " echo \"Fixing settings file permissions: $settingsPath\"\n"
_script += " /bin/chmod 666 $settingsPath\n"
_script += " fi\n"
_script += "}\n"
return _script
def generate_reboot(self) -> str:
"""
ZSH function to reboot
"""
_script = ""
_script += "function _reboot() {\n"
_script += " /sbin/reboot\n"
_script += "}\n"
return _script
def generate_prewarm_gatekeeper(self) -> str:
"""
ZSH function to prewarm Gatekeeper
"""
_script = ""
_script += "function _prewarmGatekeeper() {\n"
_script += " local appPath=$1\n\n"
_script += " # Check if /usr/bin/gktool exists\n"
_script += " if [[ ! -e /usr/bin/gktool ]]; then\n"
_script += " echo \"Host doesn't support Gatekeeper prewarming, skipping...\"\n"
_script += " return\n"
_script += " fi\n\n"
_script += " echo \"Prewarming Gatekeeper for application: $appPath\"\n"
_script += " /usr/bin/gktool scan $appPath\n"
_script += "}\n"
return _script
def generate_clean_launch_service(self) -> str:
"""
ZSH function to clean Launch Service
"""
_script = ""
_script += "function _cleanLaunchService() {\n"
_script += " local domain=\"com.dortania.opencore-legacy-patcher\"\n\n"
_script += " # Iterate over launch agents and daemons\n"
_script += " for launchServiceVariant in \"$pathToTargetVolume/Library/LaunchAgents\" \"$pathToTargetVolume/Library/LaunchDaemons\"; do\n"
_script += " # Check if directory exists\n"
_script += " if [[ ! -d $launchServiceVariant ]]; then\n"
_script += " continue\n"
_script += " fi\n\n"
_script += " # Iterate over launch service files\n"
_script += " for launchServiceFile in $(/bin/ls -1 $launchServiceVariant | /usr/bin/grep $domain); do\n"
_script += " local launchServicePath=\"$launchServiceVariant/$launchServiceFile\"\n\n"
_script += " # Remove launch service file\n"
_script += " _removeFile $launchServicePath\n"
_script += " done\n"
_script += " done\n"
_script += "}\n"
return _script
def generate_preinstall_main(self) -> str:
"""
ZSH function for preinstall's main
"""
_script = ""
_script += "function _main() {\n"
_script += " for file in $filesToRemove; do\n"
_script += " _removeFile $pathToTargetVolume/$file\n"
_script += " _createParentDirectory $pathToTargetVolume/$file\n"
_script += " done\n"
_script += "}\n"
return _script
def generate_postinstall_main(self, is_autopkg: bool = False) -> str:
"""
ZSH function for postinstall's main
"""
_script = ""
_script += "function _main() {\n"
_script += " _setSUIDBit \"$pathToTargetVolume/$helperPath\"\n"
_script += " _createAlias \"$pathToTargetVolume/$mainAppPath\" \"$pathToTargetVolume/$shimAppPath\"\n"
_script += " _prewarmGatekeeper \"$pathToTargetVolume/$mainAppPath\"\n"
if is_autopkg:
_script += " _startPatching \"$pathToTargetVolume/$executablePath\"\n"
_script += " _fixSettingsFilePermission\n"
_script += " _reboot\n"
_script += "}\n"
return _script
def generate_uninstall_main(self) -> str:
"""
ZSH function for uninstall's main
"""
_script = ""
_script += "function _main() {\n"
_script += " _cleanLaunchService\n"
_script += " for file in $filesToRemove; do\n"
_script += " _removeFile $pathToTargetVolume/$file\n"
_script += " done\n"
_script += "}\n"
return _script
class GenerateScripts:
def __init__(self):
self.zsh_functions = ZSHFunctions()
self.files = [
"Applications/OpenCore-Patcher.app",
"Library/Application Support/Dortania/Update.plist",
"Library/Application Support/Dortania/OpenCore-Patcher.app",
"Library/PrivilegedHelperTools/com.dortania.opencore-legacy-patcher.privileged-helper"
]
self.additional_auto_pkg_files = [
"Library/LaunchAgents/com.dortania.opencore-legacy-patcher.auto-patch.plist"
]
def __generate_shebang(self) -> str:
"""
Standard shebang for ZSH
"""
return "#!/bin/zsh --no-rcs\n"
def _generate_header_bar(self) -> str:
"""
# ------------------------------------------------------
"""
return "# " + "-" * 54 + "\n"
def _generate_label_bar(self) -> str:
"""
# ------------------------------
"""
return "# " + "-" * 27 + "\n"
def _generate_preinstall_script(self, is_autopkg: bool = False) -> str:
"""
Generate preinstall script for PKG
"""
_script = ""
_script += self.__generate_shebang()
_script += self._generate_header_bar()
_script += f"# {'AutoPkg Assets' if is_autopkg else 'OpenCore Legacy Patcher'} Preinstall Script\n"
_script += self._generate_header_bar()
_script += "# Remove old files, and prepare directories.\n"
_script += self._generate_header_bar()
_script += "\n\n"
_script += self.zsh_functions.generate_standard_pkg_parameters()
_script += "\n\n"
_script += "# MARK: Variables\n"
_script += self._generate_label_bar()
_script += "\n"
_files = list(self.files)  # copy first, so AutoPkg-only entries are not appended onto self.files
if is_autopkg:
    _files += self.additional_auto_pkg_files
_script += "filesToRemove=(\n"
for _file in _files:
_script += f" \"{_file}\"\n"
_script += ")\n"
_script += "\n\n"
_script += "# MARK: Functions\n"
_script += self._generate_label_bar()
_script += "\n"
_script += self.zsh_functions.generate_script_remove_file()
_script += "\n"
_script += self.zsh_functions.generate_script_create_parent_directory()
_script += "\n"
_script += self.zsh_functions.generate_preinstall_main()
_script += "\n\n"
_script += "# MARK: Main\n"
_script += self._generate_label_bar()
_script += "\n"
_script += "echo \"Starting preinstall script...\"\n"
_script += "_main\n"
return _script
def _generate_postinstall_script(self, is_autopkg: bool = False) -> str:
"""
"""
_script = ""
_script += self.__generate_shebang()
_script += self._generate_header_bar()
_script += f"# {'AutoPkg Assets' if is_autopkg else 'OpenCore Legacy Patcher'} Post Install Script\n"
_script += self._generate_header_bar()
if is_autopkg:
_script += "# Set UID, create alias, start patching, and reboot.\n"
else:
_script += "# Set SUID bit on helper tool, and create app alias.\n"
_script += self._generate_header_bar()
_script += "\n\n"
_script += self.zsh_functions.generate_standard_pkg_parameters()
_script += "\n\n"
_script += "# MARK: Variables\n"
_script += self._generate_label_bar()
_script += "\n"
_script += "helperPath=\"Library/PrivilegedHelperTools/com.dortania.opencore-legacy-patcher.privileged-helper\"\n"
_script += "mainAppPath=\"Library/Application Support/Dortania/OpenCore-Patcher.app\"\n"
_script += "shimAppPath=\"Applications/OpenCore-Patcher.app\"\n"
if is_autopkg:
_script += "executablePath=\"$mainAppPath/Contents/MacOS/OpenCore-Patcher\"\n"
_script += "\n\n"
_script += "# MARK: Functions\n"
_script += self._generate_label_bar()
_script += "\n"
_script += self.zsh_functions.generate_set_suid_bit()
_script += "\n"
_script += self.zsh_functions.generate_create_alias()
_script += "\n"
_script += self.zsh_functions.generate_prewarm_gatekeeper()
_script += "\n"
if is_autopkg:
_script += self.zsh_functions.generate_start_patching()
_script += "\n"
_script += self.zsh_functions.generate_log_file()
_script += "\n"
_script += self.zsh_functions.generate_fix_settings_file_permission()
_script += "\n"
_script += self.zsh_functions.generate_reboot()
_script += "\n"
_script += self.zsh_functions.generate_postinstall_main(is_autopkg)
_script += "\n\n"
_script += "# MARK: Main\n"
_script += self._generate_label_bar()
_script += "\n"
_script += "echo \"Starting postinstall script...\"\n"
_script += "_main\n"
return _script
def _generate_uninstall_script(self) -> str:
"""
"""
_script = ""
_script += self.__generate_shebang()
_script += self._generate_header_bar()
_script += f"# OpenCore Legacy Patcher Uninstall Script\n"
_script += self._generate_header_bar()
_script += "# Remove OpenCore Legacy Patcher files and directories.\n"
_script += self._generate_header_bar()
_script += "\n\n"
_script += self.zsh_functions.generate_standard_pkg_parameters()
_script += "\n\n"
_script += "# MARK: Variables\n"
_script += self._generate_label_bar()
_script += "\n"
_files = self.files
_script += "filesToRemove=(\n"
for _file in _files:
_script += f" \"{_file}\"\n"
_script += ")\n"
_script += "\n\n"
_script += "# MARK: Functions\n"
_script += self._generate_label_bar()
_script += "\n"
_script += self.zsh_functions.generate_script_remove_file()
_script += "\n"
_script += self.zsh_functions.generate_clean_launch_service()
_script += "\n"
_script += self.zsh_functions.generate_uninstall_main()
_script += "\n\n"
_script += "# MARK: Main\n"
_script += self._generate_label_bar()
_script += "\n"
_script += "echo \"Starting uninstall script...\"\n"
_script += "_main\n"
return _script
def preinstall_pkg(self) -> str:
"""
Generate preinstall script for PKG
"""
return self._generate_preinstall_script()
def preinstall_autopkg(self) -> str:
"""
Generate preinstall script for AutoPkg
"""
return self._generate_preinstall_script(is_autopkg=True)
def postinstall_pkg(self) -> str:
"""
Generate postinstall script for PKG
"""
return self._generate_postinstall_script()
def postinstall_autopkg(self) -> str:
"""
Generate postinstall script for AutoPkg
"""
return self._generate_postinstall_script(is_autopkg=True)
def uninstall(self) -> str:
"""
Generate uninstall script
"""
return self._generate_uninstall_script()
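The generator methods above only return script text; persisting the scripts is left to the caller. Below is a minimal sketch of such a writer, under the assumption that the scripts land in a `pkgbuild --scripts` directory, which requires them to be executable. `write_script` is a hypothetical helper for illustration, not part of OCLP:

```python
import stat
import tempfile
from pathlib import Path

def write_script(directory: Path, name: str, content: str) -> Path:
    """Write a generated script to disk and mark it executable."""
    path = directory / name
    path.write_text(content)
    # OR the execute bits onto the existing mode; pkgbuild expects
    # preinstall/postinstall scripts to be executable
    path.chmod(path.stat().st_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
    return path

# Example with a stand-in script body:
with tempfile.TemporaryDirectory() as tmp:
    script = write_script(Path(tmp), "preinstall", "#!/bin/zsh --no-rcs\necho 'hi'\n")
    print(script.name, bool(script.stat().st_mode & stat.S_IXUSR))  # preinstall True
```

OR-ing the mode rather than assigning `0o755` outright preserves whatever permission bits the file already had.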


@@ -4,6 +4,7 @@ shim.py: Generate Update Shim
 from pathlib import Path
+from opencore_legacy_patcher.volume import generate_copy_arguments
 from opencore_legacy_patcher.support import subprocess_wrapper
@@ -25,9 +26,9 @@ class GenerateShim:
         if Path(self._shim_pkg).exists():
             Path(self._shim_pkg).unlink()
-        subprocess_wrapper.run_and_verify(["/bin/cp", "-R", self._build_pkg, self._shim_pkg])
+        subprocess_wrapper.run_and_verify(generate_copy_arguments(self._build_pkg, self._shim_pkg))
         if Path(self._output_shim).exists():
             Path(self._output_shim).unlink()
-        subprocess_wrapper.run_and_verify(["/bin/cp", "-R", self._shim_path, self._output_shim])
+        subprocess_wrapper.run_and_verify(generate_copy_arguments(self._shim_path, self._output_shim))
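The diff above replaces hard-coded `/bin/cp -R` invocations with `generate_copy_arguments` from the new `volume` module. Its actual implementation is not shown in this diff; conceptually it builds the argument list for a copy. A speculative sketch, assuming it simply wraps `cp` (the real helper may differ, for example in how it requests APFS clones with `-c`):

```python
from pathlib import Path

def generate_copy_arguments(source, destination) -> list:
    """Speculative stand-in: build an argument list for copying source to destination."""
    if not Path(source).exists():
        raise FileNotFoundError(f"Source does not exist: {source}")
    # -c requests an APFS clone when the filesystem supports it; -R recurses
    # into directories and .app/.pkg bundles
    return ["/bin/cp", "-cR", str(source), str(destination)]

print(generate_copy_arguments("/tmp", "/private/tmp/copy"))
```

Centralizing the copy arguments this way means a future change (such as dropping the clone flag on non-APFS volumes) only touches one place.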


@@ -1,74 +0,0 @@
#!/bin/zsh --no-rcs
# ------------------------------------------------------
# OpenCore Legacy Patcher PKG Post Install Script
# ------------------------------------------------------
# Set SUID bit on helper tool, and create app alias.
# ------------------------------------------------------
# MARK: PackageKit Parameters
# ---------------------------
pathToScript=$0 # ex. /tmp/PKInstallSandbox.*/Scripts/*/preinstall
pathToPackage=$1 # ex. ~/Downloads/Installer.pkg
pathToTargetLocation=$2 # ex. '/', '/Applications', etc (depends on pkgbuild's '--install-location' argument)
pathToTargetVolume=$3 # ex. '/', '/Volumes/MyVolume', etc
pathToStartupDisk=$4 # ex. '/'
# MARK: Variables
# ---------------------------
helperPath="Library/PrivilegedHelperTools/com.dortania.opencore-legacy-patcher.privileged-helper"
mainAppPath="Library/Application Support/Dortania/OpenCore-Patcher.app"
shimAppPath="Applications/OpenCore-Patcher.app"
# MARK: Functions
# ---------------------------
function _setSUIDBit() {
local binaryPath=$1
echo "Setting SUID bit on: $binaryPath"
# Check if path is a directory
if [[ -d $binaryPath ]]; then
/bin/chmod -R +s $binaryPath
else
/bin/chmod +s $binaryPath
fi
}
function _createAlias() {
local mainPath=$1
local aliasPath=$2
# Check if alias path exists
if [[ -e $aliasPath ]]; then
# Check if alias path is a symbolic link
if [[ -L $aliasPath ]]; then
echo "Removing old symbolic link: $aliasPath"
/bin/rm -f $aliasPath
else
echo "Removing old file: $aliasPath"
/bin/rm -rf $aliasPath
fi
fi
# Create symbolic link
echo "Creating symbolic link: $aliasPath"
/bin/ln -s $mainPath $aliasPath
}
function _main() {
_setSUIDBit "$pathToTargetVolume/$helperPath"
_createAlias "$pathToTargetVolume/$mainAppPath" "$pathToTargetVolume/$shimAppPath"
}
# MARK: Main
# ---------------------------
echo "Starting postinstall script..."
_main


@@ -1,79 +0,0 @@
#!/bin/zsh --no-rcs
# ------------------------------------------------------
# OpenCore Legacy Patcher PKG Preinstall Script
# ------------------------------------------------------
# Remove old files, and prepare directories.
# ------------------------------------------------------
# MARK: PackageKit Parameters
# ---------------------------
pathToScript=$0 # ex. /tmp/PKInstallSandbox.*/Scripts/*/preinstall
pathToPackage=$1 # ex. ~/Downloads/Installer.pkg
pathToTargetLocation=$2 # ex. '/', '/Applications', etc (depends on pkgbuild's '--install-location' argument)
pathToTargetVolume=$3 # ex. '/', '/Volumes/MyVolume', etc
pathToStartupDisk=$4 # ex. '/'
# MARK: Variables
# ---------------------------
filesToRemove=(
"Applications/OpenCore-Patcher.app"
"Library/Application Support/Dortania/Update.plist"
"Library/Application Support/Dortania/OpenCore-Patcher.app"
"Library/PrivilegedHelperTools/com.dortania.opencore-legacy-patcher.privileged-helper"
)
# MARK: Functions
# ---------------------------
function _removeFile() {
local file=$1
if [[ ! -e $file ]]; then
# Check if file is a symbolic link
if [[ -L $file ]]; then
echo "Removing symbolic link: $file"
/bin/rm -f $file
fi
return
fi
echo "Removing file: $file"
# Check if file is a directory
if [[ -d $file ]]; then
/bin/rm -rf $file
else
/bin/rm -f $file
fi
}
function _createParentDirectory() {
local file=$1
local parentDirectory="$(/usr/bin/dirname $file)"
# Check if parent directory exists
if [[ ! -d $parentDirectory ]]; then
echo "Creating parent directory: $parentDirectory"
/bin/mkdir -p $parentDirectory
fi
}
function _main() {
for file in $filesToRemove; do
_removeFile $pathToTargetVolume/$file
_createParentDirectory $pathToTargetVolume/$file
done
}
# MARK: Main
# ---------------------------
echo "Starting preinstall script..."
_main


@@ -1,85 +0,0 @@
#!/bin/zsh --no-rcs
# ------------------------------------------------------
# OpenCore Legacy Patcher PKG Uninstall Script
# ------------------------------------------------------
# MARK: PackageKit Parameters
# ---------------------------
pathToScript=$0 # ex. /tmp/PKInstallSandbox.*/Scripts/*/preinstall
pathToPackage=$1 # ex. ~/Downloads/Installer.pkg
pathToTargetLocation=$2 # ex. '/', '/Applications', etc (depends on pkgbuild's '--install-location' argument)
pathToTargetVolume=$3 # ex. '/', '/Volumes/MyVolume', etc
pathToStartupDisk=$4 # ex. '/'
# MARK: Variables
# ---------------------------
filesToRemove=(
"Applications/OpenCore-Patcher.app"
"Library/Application Support/Dortania/Update.plist"
"Library/Application Support/Dortania/OpenCore-Patcher.app"
"Library/PrivilegedHelperTools/com.dortania.opencore-legacy-patcher.privileged-helper"
)
# MARK: Functions
# ---------------------------
function _removeFile() {
local file=$1
if [[ ! -e $file ]]; then
# Check if file is a symbolic link
if [[ -L $file ]]; then
echo "Removing symbolic link: $file"
/bin/rm -f $file
fi
return
fi
echo "Removing file: $file"
# Check if file is a directory
if [[ -d $file ]]; then
/bin/rm -rf $file
else
/bin/rm -f $file
fi
}
function _cleanLaunchService() {
local domain="com.dortania.opencore-legacy-patcher"
# Iterate over launch agents and daemons
for launchServiceVariant in "$pathToTargetVolume/Library/LaunchAgents" "$pathToTargetVolume/Library/LaunchDaemons"; do
# Check if directory exists
if [[ ! -d $launchServiceVariant ]]; then
continue
fi
# Iterate over launch service files
for launchServiceFile in $(/bin/ls -1 $launchServiceVariant | /usr/bin/grep $domain); do
local launchServicePath="$launchServiceVariant/$launchServiceFile"
# Remove launch service file
_removeFile $launchServicePath
done
done
}
function _main() {
_cleanLaunchService
for file in $filesToRemove; do
_removeFile "$pathToTargetVolume/$file"
done
}
# MARK: Main
# ---------------------------
echo "Starting uninstall script..."
_main


@@ -131,6 +131,7 @@ module.exports = {
             'ICNS',
             'WINDOWS',
             'UNIVERSALCONTROL',
+            'PROCESS',
         ]
     },
     {


@@ -1,9 +1,25 @@
 # Supported Models
 
+### Application requirements
+
+The patcher application requires **OS X Yosemite 10.10** or later to run.
+* **OS X El Capitan 10.11** or later is required to make installers for macOS Ventura and later.
+
+The patcher is designed to target **macOS Big Sur 11.x to macOS Sonoma 14.x**.
+* Other versions may work, albeit in a broken state. No support is provided for any version outside of the above.
+
+-------
+
 Any Intel-based Mac listed below can install and make use of OpenCore Legacy Patcher. To check your hardware model, open System Information and look for the `Model Identifier` key.
 * This applies even if Apple supports the model natively.
 * OpenCore Legacy Patcher does not support PowerPC- or Apple Silicon-based Macs.
 * If your model is not listed below, it is not supported by this patcher.
 
+::: warning Note
+
+It is **strongly recommended** to update your Mac to its latest native version before using OpenCore Legacy Patcher, to ensure you're on the latest firmware.
+
+:::
+
 The below tables can be used to reference issues with a particular model, and see which OS would work best on your machine.
 * [MacBook](#macbook)
 * [MacBook Air](#macbook-air)
@@ -13,14 +29,6 @@ The below tables can be used to reference issues with a particular model, and se
 * [Mac Pro](#mac-pro)
 * [Xserve](#xserve)
 
-::: details OpenCore Patcher application
-
-The patcher application requires **OS X Yosemite 10.10** or later to run.
-* **OS X El Capitan 10.11** or later is required to make installers for macOS Ventura and later.
-
-The patcher is designed to target **macOS Big Sur 11.x to macOS Sonoma 14.x**.
-* Other versions may work, albeit in a broken state. No support is provided for any version outside of the above.
-:::
 
 ### MacBook
 
@@ -50,8 +58,8 @@ The patcher is designed to target **macOS Big Sur 11.x to macOS Sonoma 14.x**.
 | MacBook Air (11-inch, Early 2015) | `MacBookAir7,1` | ^^ |
 | MacBook Air (13-inch, Early 2015)<br>MacBook Air (13-inch, 2017) | `MacBookAir7,2` | ^^ |
 | MacBook Air (Retina, 13-inch, 2018) | `MacBookAir8,1` | - Supported by Apple |
-| MacBook Air (Retina, 13-inch, 2019) | `MacBookAir9,1` | ^^ |
-| MacBook Air (Retina, 13-inch, 2020) | `MacBookAir10,1` | ^^ |
+| MacBook Air (Retina, 13-inch, 2019) | `MacBookAir8,2` | ^^ |
+| MacBook Air (Retina, 13-inch, 2020) | `MacBookAir9,1` | ^^ |
 
 ### MacBook Pro


@@ -2,6 +2,7 @@
 * [Booting without USB drive](#booting-without-usb-drive)
 * [Booting seamlessly without Boot Picker](#booting-seamlessly-without-boot-picker)
+* [SIP settings](#sip-settings)
 * [Applying Post Install Volume Patches](#applying-post-install-volume-patches)
 
 ## Booting without USB drive
@@ -24,23 +25,40 @@ To do this, run the OpenCore Patcher and head to Patcher Settings, then uncheck
 Once you've toggled it off, build your OpenCore EFI once again and install to your desired drive. Now to show the OpenCore selector, you can simply hold down the "ESC" key while clicking on EFI boot, and then you can release the "ESC" key when you see the cursor arrow at the top left.
 
-## Enabling SIP
+## SIP settings
 
-For many users, SIP will be lowered by default on build. For Intel HD 4000 users, you may have noticed that SIP is partially disabled. This is to ensure full compatibility with macOS Monterey and allow seamless booting between it and older OSes. However for users who do not plan to boot Monterey, you can re-enable under Patcher Settings.
-
-Note: Machines running macOS Ventura or systems with non-Metal GPUs cannot enable SIP outright, due to having a patched root volume. Enabling it will brick the installation.
-
-Going forward with 0.6.6, SIP settings can be accessed from the Security tab shown in the images.
+SIP, or System Integrity Protection, needs to be lowered on systems where root patching is required to patch data on disk. This varies between OS versions and the model in question. By default, OCLP determines the proper SIP options for the OS version and Mac model, so in most cases there is no need to touch these settings. This section explains how the SIP settings work in OCLP, where lowered SIP is needed, and where full SIP can be enabled.
+
+:::warning
+If you're unsure whether you should change the SIP settings, leave them as-is. Systems where you have already run the Post Install Root Patching cannot enable SIP without potentially breaking the current install.
+:::
+
+SIP settings can be accessed from the Security tab shown in the images. To change SIP settings, make the changes there, return to the main menu, and rebuild OpenCore using the first option.
 
 | SIP Enabled | SIP Lowered (Root Patching) | SIP Disabled |
 | :--- | :--- | :--- |
 | ![](./images/OCLP-GUI-Settings-SIP-Enabled.png) | ![](./images/OCLP-GUI-Settings-SIP-Root-Patch.png) | ![](./images/OCLP-GUI-Settings-SIP-Disabled.png) |
 
-:::warning
-If you're unsure whether you should enable SIP, leave it as-is. Systems where you have already ran the Post Install Root Patching cannot enable SIP without potentially breaking the current install.
-:::
+In the cases where SIP can be enabled, it must be enabled manually. The easiest way to check whether you can fully enable SIP is the "Post Install Root Patch" section: if it reports that your system doesn't need patches (or you don't install the patches, e.g. because you don't need WiFi on a Mac Pro with an upgraded GPU running Monterey), it is safe to assume full SIP can be enabled.
+
+**Ventura and newer**
+
+All unsupported systems require lowered SIP.
+
+**Monterey**
+
+The majority of unsupported systems from 2013 onward can enable full SIP.
+
+Pre-2012 systems, also known as "non-Metal" (including Mac Pros without an upgraded GPU), as well as systems with NVIDIA Kepler or Intel HD 4000 GPUs, require lowered SIP.
+
+Some systems such as Mac Pros also require root patching for stock WiFi cards, but if you do not need WiFi or plan to upgrade the card, there is no need for root patching and SIP can be fully enabled.
+
+**Big Sur**
+
+All Metal-capable systems from 2012 onward (including NVIDIA Kepler and Intel HD 4000), as well as Mac Pros with an upgraded GPU, can run with full SIP enabled.
+
+Non-Metal systems still require lowered SIP.
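The current SIP state can always be verified in Terminal with `csrutil status`. As a small illustration of reading that state programmatically, here is a parser for the command's first status line (`sip_is_enabled` is a hypothetical helper, not part of OCLP; run `csrutil status` yourself to get the real string):

```python
def sip_is_enabled(csrutil_output: str) -> bool:
    """Return True if the first line of `csrutil status` output reports SIP enabled."""
    first_line = csrutil_output.strip().splitlines()[0].lower()
    # "disabled" also contains "abled", so rule it out explicitly
    return "enabled" in first_line and "disabled" not in first_line

# Typical first lines printed by `csrutil status`:
print(sip_is_enabled("System Integrity Protection status: enabled."))   # True
print(sip_is_enabled("System Integrity Protection status: disabled."))  # False
```

Note that on root-patched systems `csrutil status` may report a custom configuration rather than a plain enabled/disabled line, which is exactly the lowered-SIP state described above.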
## Applying Post Install Volume Patches

docs/PROCESS.md (new file)

@@ -0,0 +1,19 @@
# Background process
OpenCore Legacy Patcher utilizes a background process to:
- Check for mismatched configurations and warn the user (e.g. installed MacBookPro11,1 config on MacBookPro11,5)
- Monitor the status of installed Root Patches and OpenCore
- Ask you to install Root Patches in case they aren't detected (typically after an update)
- Check whether OpenCore is being booted from USB drive or internal drive
- Ask you to install OpenCore on the internal disk in case booted from USB
- React to upcoming updates requiring a new KDK to be downloaded, starting KDK download automatically
It is recommended to keep the background process enabled for the smoothest experience, e.g. to avoid failed patching when a new KDK is not found.

If you decide to disable the background process, the KDK installation for each update has to be done manually. OCLP is also unable to detect Root Patches on boot, meaning you will have to open the app and root patch manually.

::: warning Note:

In some cases macOS may report the background process as being added by "Mykola Grymalyuk". This happens due to a macOS bug where the name of the developer who submitted the app for notarization is sometimes shown instead of the application name.

Dortania cannot do anything about this.
:::
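The background process is registered through a launch agent whose plist path appears in the packaging scripts above. Whether it is installed can be checked by looking for that file; the checker function below is a hypothetical illustration, not OCLP's own code:

```python
import tempfile
from pathlib import Path

AUTO_PATCH_AGENT = Path("/Library/LaunchAgents/com.dortania.opencore-legacy-patcher.auto-patch.plist")

def background_process_installed(agent: Path = AUTO_PATCH_AGENT) -> bool:
    """Return True if the OCLP auto-patch launch agent plist is present on disk."""
    return agent.is_file()

# Demonstration against a temporary directory rather than the live system:
with tempfile.TemporaryDirectory() as tmp:
    fake = Path(tmp) / AUTO_PATCH_AGENT.name
    print(background_process_installed(fake))  # False: no agent written yet
    fake.write_text("<plist/>")
    print(background_process_installed(fake))  # True
```

On a real system you would call `background_process_installed()` with no arguments; the plist being absent is consistent with the background process having been disabled or uninstalled.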


@@ -1,206 +1,254 @@
# Troubleshooting # Troubleshooting
Here are some common errors that users may experience while using this patcher: Here are some common errors that users may experience while using this patcher:
* [OpenCore Legacy Patcher not launching](#opencore-legacy-patcher-not-launching) * [OpenCore Legacy Patcher not launching](#opencore-legacy-patcher-not-launching)
* [Stuck on `This version of Mac OS X is not supported on this platform` or (🚫) Prohibited Symbol](#stuck-on-this-version-of-mac-os-x-is-not-supported-on-this-platform-or-(🚫)-prohibited-symbol) * ["You don't have permission to save..." error when creating USB installer](#you-don-t-have-permission-to-save-error-when-creating-usb-installer)
* [Cannot boot macOS without the USB](#cannot-boot-macos-without-the-usb) * [Stuck on `This version of Mac OS X is not supported on this platform` or (🚫) Prohibited Symbol](#stuck-on-this-version-of-mac-os-x-is-not-supported-on-this-platform-or-🚫-prohibited-symbol)
* [Infinite Recovery OS Booting](#infinite-recovery-os-reboot) * [Cannot boot macOS without the USB](#cannot-boot-macos-without-the-usb)
* [Stuck on boot after root patching](#stuck-on-boot-after-root-patching) * [Infinite Recovery OS Booting](#infinite-recovery-os-reboot)
* [Reboot when entering Hibernation (`Sleep Wake Failure`)](#reboot-when-entering-hibernation-sleep-wake-failure) * [Stuck on boot after root patching](#stuck-on-boot-after-root-patching)
* [How to Boot Recovery through OpenCore Legacy Patcher](#how-to-boot-recovery-through-opencore-legacy-patcher) * ["Unable to resolve dependencies, error code 71" when root patching](#unable-to-resolve-dependencies-error-code-71-when-root-patching)
* [Stuck on "Your Mac needs a firmware update"](#stuck-on-your-mac-needs-a-firmware-update) * [Reboot when entering Hibernation (`Sleep Wake Failure`)](#reboot-when-entering-hibernation-sleep-wake-failure)
* [No Brightness Control](#no-brightness-control) * [How to Boot Recovery through OpenCore Legacy Patcher](#how-to-boot-recovery-through-opencore-legacy-patcher)
* [Cannot connect Wi-Fi on Monterey with legacy cards](#cannot-connect-Wi-Fi-on-Monterey-with-legacy-cards) * [Stuck on "Your Mac needs a firmware update"](#stuck-on-your-mac-needs-a-firmware-update)
* [No Graphics Acceleration](#no-graphics-acceleration) * [No Brightness Control](#no-brightness-control)
* [Black Screen on MacBookPro11,3 in macOS Monterey](#black-screen-on-macbookpro113-in-macos-monterey) * [Cannot connect Wi-Fi on Monterey with legacy cards](#cannot-connect-Wi-Fi-on-Monterey-with-legacy-cards)
* [No DisplayPort Output on Mac Pros with NVIDIA Kepler](#no-displayport-output-on-mac-pros-with-NVIDIA-kepler) * [No Graphics Acceleration](#no-graphics-acceleration)
* [Volume Hash Mismatch Error in macOS Monterey](#volume-hash-mismatch-error-in-macos-monterey) * [Black Screen on MacBookPro11,3 in macOS Monterey](#black-screen-on-macbookpro113-in-macos-monterey)
* [Cannot Disable SIP in recoveryOS](#cannot-disable-sip-in-recoveryos) * [No DisplayPort Output on Mac Pros with NVIDIA Kepler](#no-displayport-output-on-mac-pros-with-NVIDIA-kepler)
* [Stuck on "Less than a minute remaining..."](#stuck-on-less-than-a-minute-remaining) * [Volume Hash Mismatch Error in macOS Monterey](#volume-hash-mismatch-error-in-macos-monterey)
* [No acceleration after a Metal GPU swap on Mac Pro](#no-acceleration-after-a-metal-gpu-swap-on-mac-pro) * [Cannot Disable SIP in recoveryOS](#cannot-disable-sip-in-recoveryos)
* [Keyboard, Mouse and Trackpad not working in installer or after update](#keyboard-mouse-and-trackpad-not-working-in-installer-or-after-update) * [Stuck on "Less than a minute remaining..."](#stuck-on-less-than-a-minute-remaining)
* [No acceleration after a Metal GPU swap on Mac Pro](#no-acceleration-after-a-metal-gpu-swap-on-mac-pro)
* [Keyboard, Mouse and Trackpad not working in installer or after update](#keyboard-mouse-and-trackpad-not-working-in-installer-or-after-update)
## OpenCore Legacy Patcher not launching
If the application won't launch (e.g. icon will bounce in the Dock), try launching OCLP via Terminal by typing the following command, make sure you've moved the app to `/Applications` before this. ## OpenCore Legacy Patcher not launching
```sh If the application won't launch (e.g. icon will bounce in the Dock), try launching OCLP via Terminal by typing the following command, make sure you've moved the app to `/Applications` before this.
/Applications/OpenCore-Patcher.app/Contents/MacOS/OpenCore-Patcher
``` ```sh
/Applications/OpenCore-Patcher.app/Contents/MacOS/OpenCore-Patcher
## Stuck on `This version of Mac OS X is not supported on this platform` or (🚫) Prohibited Symbol ```
## "You don't have permission to save..." error when creating USB installer

In some cases, an error saying "The bless of the installer disk failed", stating the reason as "You don't have permission to save...", may appear.

<div align="center">
<img src="./images/Error-No-Permission-To-Save.png" alt="NoPermissionToSave" width="400" />
</div>

To resolve this, you may try adding the Full Disk Access permission for OpenCore Legacy Patcher. To add it, first go to the settings:

* Ventura and Sonoma: Go to System Settings -> Privacy and Security -> Full Disk Access
* Big Sur and Monterey: Go to System Preferences -> Security and Privacy -> Full Disk Access

Enable OpenCore-Patcher in the list. If it is not in the list, press the + sign to add a new entry and select OpenCore Legacy Patcher from Applications.

Restart OpenCore Legacy Patcher and try creating your USB drive again.

Optional: After you've created your USB drive, you can remove OpenCore Legacy Patcher from Full Disk Access again.
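If the Full Disk Access toggle still doesn't take effect after restarting the app, resetting the app's privacy-database (TCC) entry sometimes helps. This is a hedged sketch, not part of OCLP's official instructions: `tccutil` is Apple's built-in tool, but the bundle identifier shown is an assumption — look up the real one first.

```shell
# Runs only on macOS, where tccutil exists; a no-op elsewhere.
if command -v tccutil >/dev/null 2>&1; then
    # Look up the app's real bundle identifier (app name assumed here)
    osascript -e 'id of app "OpenCore-Patcher"'

    # Reset the Full Disk Access (SystemPolicyAllFiles) entry for that bundle ID
    # (bundle ID below is an assumption -- substitute the value printed above)
    sudo tccutil reset SystemPolicyAllFiles com.dortania.opencore-legacy-patcher
fi
```

After the reset, re-add the app to Full Disk Access as described above.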
## Stuck on `This version of Mac OS X is not supported on this platform` or (🚫) Prohibited Symbol

This means macOS has detected an SMBIOS it does not support. To resolve this, ensure you're booting OpenCore **before** the macOS installer in the boot picker. Reminder that the option will be called `EFI Boot`.

Once you've booted OpenCore at least once, your hardware should auto-boot it until either an NVRAM reset occurs or you remove the drive with OpenCore installed.

However, if the 🚫 symbol only appears after the boot process has already started (the boot screen appears/verbose boot starts), it could mean that your USB drive has failed to pass macOS' integrity checks. To resolve this, create a new installer using a different USB drive (preferably of a different model).

## Cannot boot macOS without the USB

By default, the OpenCore Patcher won't install OpenCore onto the internal drive itself during installs.

After installing macOS, OpenCore Legacy Patcher should automatically prompt you to install OpenCore onto the internal drive. However, if it doesn't show the prompt, you'll need to either [manually transfer](https://dortania.github.io/OpenCore-Post-Install/universal/oc2hdd.html) OpenCore to the internal drive's EFI or Build and Install again and select your internal drive.

Reminder that once this is done, you'll need to select OpenCore in the boot picker again for your hardware to remember this entry and auto-boot from then on.

## Infinite Recovery OS Booting

With OpenCore Legacy Patcher, we rely on Apple Secure Boot to ensure OS updates work correctly and reliably with Big Sur. However, this installs NVRAM variables that will confuse your Mac if it is not running with OpenCore. To resolve this, simply uninstall OpenCore and [reset NVRAM](https://support.apple.com/en-us/HT201255).

* Note: Machines with modified root volumes will also result in an infinite recovery loop until integrity is restored.

## Stuck on boot after root patching

Boot into recovery by pressing space when your disk is selected in the OCLP boot picker (if you have it hidden, hold ESC while starting up).

* **Note:** If your disk name is something other than "Macintosh HD", make sure to change the paths below accordingly. You can figure out your disk name by typing `ls /Volumes`.

Go into the terminal and first mount the disk by typing

```sh
mount -uw "/Volumes/Macintosh HD"
```

Then revert the snapshot

```sh
bless --mount "/Volumes/Macintosh HD" --bootefi --last-sealed-snapshot
```

Now we're going to clean the /Library/Extensions folder of offending kexts while keeping the needed ones.

Run the following and **make sure to type it carefully**

::: warning
If you have **FileVault 2 enabled**, you will need to mount the Data volume first. This can be done in Disk Utility by locating your macOS volume name, selecting its Data volume, and selecting the Mount option in the toolbar.
:::

```sh
cd "/Volumes/Macintosh HD - Data/Library/Extensions" && ls | grep -v "HighPoint*\|SoftRAID*" | xargs rm -rf
```

Then restart; your system should now be restored to the unpatched snapshot and be able to boot again.
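If you want to sanity-check the filter used above before pointing it at the real Extensions folder, you can dry-run the `grep -v` stage on sample names (the non-HighPoint/SoftRAID kext name below is made up for illustration):

```shell
# Dry run of the kext filter: names matching HighPoint*/SoftRAID* survive,
# everything else would be piped on to `rm -rf`.
printf '%s\n' "SomeVendor.kext" "HighPointIOP.kext" "HighPointRR.kext" "SoftRAID.kext" |
    grep -v "HighPoint*\|SoftRAID*"
# Prints: SomeVendor.kext
```

Inside the real folder, running just `ls | grep -v "HighPoint*\|SoftRAID*"` previews the exact deletion list without removing anything.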
## "Unable to resolve dependencies, error code 71" when root patching

If you're getting this error, it typically means you have some offending kernel extensions; to fix this, you will have to clear them.

Semi-automated way:

1. Open Terminal
2. Type `sudo zsh`
3. Type `cd "/Volumes/Macintosh HD/Library/Extensions" && ls | grep -v "HighPoint*\|SoftRAID*" | xargs rm -rf`
   * Make sure to rename "Macintosh HD" to what your drive name is
4. Run OCLP root patcher again

Manual way:

1. Navigate to /Library/Extensions
2. Delete everything **except** HighPointIOP.kext, HighPointRR.kext and SoftRAID.kext
3. Run OCLP root patcher again

If there is no success, navigate to "/Library/Developer/KDKs" and delete everything.

If there is still no success, type `sudo bless --mount "/Volumes/Macintosh HD/" --bootefi --last-sealed-snapshot`
   * Make sure again to rename "Macintosh HD" to what your drive name is

Run OCLP root patcher again.
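If you'd rather not delete third-party kexts outright, a more cautious variant of the cleanup is to move them aside so they can be restored later. This is a hedged sketch, not part of OCLP's official instructions; `quarantine_kexts` and the backup location are made-up names:

```shell
# quarantine_kexts <extensions-dir> <backup-dir>
# Moves every kext except the HighPoint*/SoftRAID* allowlist into <backup-dir>.
quarantine_kexts() {
    ext_dir="$1"
    backup_dir="$2"
    mkdir -p "$backup_dir"
    for kext in "$ext_dir"/*; do
        case "$(basename "$kext")" in
            HighPoint*|SoftRAID*) ;;          # keep vendor RAID kexts in place
            *) mv "$kext" "$backup_dir/" ;;   # quarantine everything else
        esac
    done
}

# On the affected machine (adjust the volume name as above):
# quarantine_kexts "/Volumes/Macintosh HD/Library/Extensions" "$HOME/kext-backup"
```

Once the system boots and root patching succeeds, the backup folder can be deleted.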
## Reboot when entering Hibernation (`Sleep Wake Failure`)

[Known issue on some models](https://github.com/dortania/Opencore-Legacy-Patcher/issues/72); a temporary fix is to disable Hibernation by executing the following command in the terminal:

```sh
sudo pmset -a hibernatemode 0
```

## How to Boot Recovery through OpenCore Legacy Patcher

By default, the patcher will try to hide extra boot options such as recovery from the user. To make them appear, simply press the `Spacebar` key while inside OpenCore's Picker to list all boot options.

## Stuck on "Your Mac needs a firmware update"

Full error: "Your Mac needs a firmware update in order to install to this Volume. Please select a Mac OS Extended (Journaled) volume instead."

This error occurs when macOS determines that the current firmware does not have full APFS support. To resolve this, when installing OpenCore, head to "Patcher Settings" and enable "Moderate SMBIOS Patching" or higher. This will ensure that the reported firmware shows support for full APFS capabilities.

## No Brightness Control

With OCLP v0.0.22, we've added support for brightness control on many models. However, some users may have noticed that their brightness keys do not work.

As a work-around, we recommend users try out the app below:

* [Brightness Slider](https://actproductions.net/free-apps/brightness-slider/)

## Cannot connect Wi-Fi on Monterey with legacy cards

With OCLP v0.2.5, we've added support for legacy Wi-Fi on Monterey. However, some users may have noticed that they can't connect to wireless networks.

To work around this, we recommend that users manually connect using the "Other" option in the Wi-Fi menu bar or manually add the network in the "Network" preference pane.
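Joining the network from the terminal can also work when the Wi-Fi menu misbehaves. `networksetup` ships with macOS; the interface name, SSID and password below are placeholders:

```shell
# Guarded so this is a no-op on systems without networksetup (i.e. non-macOS).
if command -v networksetup >/dev/null 2>&1; then
    # Find the Wi-Fi interface name (usually en0 or en1)
    networksetup -listallhardwareports

    # Join the network directly; replace the interface, SSID and password
    networksetup -setairportnetwork en1 "MyNetwork" "MyPassword"
fi
```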
## No Graphics Acceleration

In macOS, GPU drivers are often dropped from the OS with each major release. With macOS Big Sur, currently, all non-Metal GPUs require additional patches to gain acceleration. In addition, macOS Monterey removed graphics drivers for both Intel Ivy Bridge and NVIDIA Kepler graphics processors.

If you're using OCLP v0.4.4, you should have been prompted to install Root Volume patches after the first boot from installation of macOS. If you need to do this manually, you can do so within the patcher app. Once rebooted, acceleration will be re-enabled, as well as brightness control for laptops.

## Black Screen on MacBookPro11,3 in macOS Monterey

Due to Apple dropping NVIDIA Kepler support in macOS Monterey, [MacBookPro11,3's GMUX has difficulties switching back to the iGPU to display macOS correctly.](https://github.com/dortania/OpenCore-Legacy-Patcher/issues/522) To work around this issue, boot the MacBookPro11,3 in Safe Mode and, once macOS is installed, run OCLP's Post Install Root Patches to enable GPU acceleration for the NVIDIA dGPU.

* Safe Mode can be started by holding `Shift` + `Enter` when selecting macOS Monterey in OCLP's Boot Menu.

## No DisplayPort Output on Mac Pros with NVIDIA Kepler

If you're having trouble with DisplayPort output on Mac Pros, try enabling Minimal Spoofing in Settings -> SMBIOS Settings and rebuild/install OpenCore. This will trick macOS drivers into thinking you have a newer MacPro7,1 and resolve the issue.

![](./images/OCLP-GUI-SMBIOS-Minimal.png)

## Volume Hash Mismatch Error in macOS Monterey

A semi-common popup some users face is the "Volume Hash Mismatch" error:

<p align="center">
<img src="./images/Hash-Mismatch.png">
</p>

What this error signifies is that the OS detects that the boot volume's hash does not match what the OS is expecting. This error is generally cosmetic and can be ignored. However, if your system starts to crash spontaneously shortly after, you'll want to reinstall macOS fresh without importing any data at first.

* Note that this bug affects native Macs as well and is not due to issues with unsupported Macs: [OSX Daily: “Volume Hash Mismatch” Error in MacOS Monterey](https://osxdaily.com/2021/11/10/volume-hash-mismatch-error-in-macos-monterey/)

Additionally, it can help to disable FeatureUnlock in Settings -> Misc Settings, as this tool can be strenuous on systems with weaker memory stability.

## Cannot Disable SIP in recoveryOS

With OCLP, the patcher will always overwrite the current SIP value on boot to ensure that users don't brick an installation after an NVRAM reset. However, for users wanting to disable SIP entirely, this can be done easily.

Head into the GUI, go to Patcher Settings, and toggle the bits you need disabled from SIP:

| SIP Enabled | SIP Lowered (Root Patching) | SIP Disabled |
| :--- | :--- | :--- |
| ![](./images/OCLP-GUI-Settings-SIP-Enabled.png) | ![](./images/OCLP-GUI-Settings-SIP-Root-Patch.png) | ![](./images/OCLP-GUI-Settings-SIP-Disabled.png) |
## Intermittent issues with USB 1.1 and Bluetooth on MacPro3,1 - MacPro5,1
For those experiencing issues with USB 1.1 devices (such as mice, keyboards and Bluetooth chipsets), note that macOS Big Sur and newer have weakened OS-side reliability for the UHCI controller in older Mac Pros.
* UHCI is a USB 1.1 controller that is hooked together with the USB 2.0 ports in your system. Whenever a USB 1.1 device is detected, the UHCI controller is given ownership of the device at a hardware/firmware level.
* EHCI is the USB 2.0 controller in older Mac Pros
Because of this, we recommend placing a USB 2.0/3.0 hub between your devices and the port on the Mac Pro. UHCI and EHCI cannot both be used at once, so using a USB hub will always force the EHCI controller on.
* Alternatively, you can try cold-starting the hardware and see if macOS recognizes the UHCI controller properly.
## Stuck on "Less than a minute remaining..."
This is a common point for systems to get "stuck", namely on units missing the `AES` CPU instruction and on older mobile hardware. During this stage, a lot of heavy cryptography is performed, which can make systems appear to be stuck. In reality they are working quite hard to finish up the installation.
Because this step can take a few hours or more depending on drive speeds, be patient at this stage and do not manually power off or reboot your machine as this will break the installation and require you to reinstall. If you think your system has stalled, press the Caps Lock key. If the light turns on, your system is busy and not actually frozen.
## No acceleration after a Metal GPU swap on Mac Pro
If you finished installing Monterey with the original card installed (to see the boot picker, for example) and then swapped your GPU for a Metal-supported one, you may notice that you're missing acceleration. To fix this, open OCLP and revert root patches to get your Metal-supported GPU working again.
Alternatively, you can remove "AutoPkg-Assets.pkg" from /Library/Packages on the USB drive before proceeding with the installation. To see the folder, enable hidden files with `Command` + `Shift` + `.`
The reason for this is that the autopatcher will assume that you will be using the original graphics card and therefore does non-metal patching, which includes removing some drivers for other cards. This causes Metal cards to not accelerate after swapping.
## Keyboard, Mouse and Trackpad not working in installer or after update
For Macs using legacy USB 1.1 controllers, OpenCore Legacy Patcher can only restore support once it has performed root volume patches. Thus to install macOS, you need to hook up a USB hub between your Mac and Keyboard/Mouse.
* For MacBook users, you'll need to find an external keyboard/mouse in addition to the USB hub
More information can be found here:
* [Legacy UHCI/OHCI support in Ventura #1021](https://github.com/dortania/OpenCore-Legacy-Patcher/issues/1021)
Applicable models include:
| Family | Year | Model | Notes |
| :---------- | :--------------------| :---------------------------- | :----------------------------------------------- |
| MacBook | Mid 2010 and older | MacBook5,1 - MacBook7,1 | |
| MacBook Air | Late 2010 and older | MacBookAir2,1 - MacBookAir3,x | |
| MacBook Pro | Mid 2010 and older | MacBookPro4,1 - MacBookPro7,x | Excludes Mid 2010 15" and 17" (MacBookPro6,x) |
| iMac | Late 2009 and older | iMac7,1 - iMac10,x | Excludes Core i5/7 27" late 2009 iMac (iMac11,1) |
| Mac mini | Mid 2011 and older | Macmini3,1 - Macmini5,x | |
| Mac Pro | Mid 2010 and older | MacPro3,1 - MacPro5,1 | |
![](./images/usb11-chart.png)


View File

@@ -13,8 +13,8 @@ from .detections import device_probe
 class Constants:
     def __init__(self) -> None:
         # Patcher Versioning
-        self.patcher_version: str = "1.5.0"  # OpenCore-Legacy-Patcher
-        self.patcher_support_pkg_version: str = "1.4.9"  # PatcherSupportPkg
+        self.patcher_version: str = "1.6.0"  # OpenCore-Legacy-Patcher
+        self.patcher_support_pkg_version: str = "1.6.3"  # PatcherSupportPkg
         self.copyright_date: str = "Copyright © 2020-2024 Dortania"
         self.patcher_name: str = "OpenCore Legacy Patcher"

View File

@@ -29,6 +29,7 @@ class os_data(enum.IntEnum):
     monterey = 21
     ventura  = 22
     sonoma   = 23
+    sequoia  = 24
     max_os   = 99

View File

@@ -91,10 +91,10 @@ class SystemPatchDictionary():
         - AppleIntelHD4000Graphics.kext
         """
         if self.os_major < os_data.os_data.sonoma:
-            return "11.4"
+            return "11.7.10"
         if self.os_float < self.macOS_14_4:
-            return "11.4-23"
-        return "11.4-23.4"
+            return "11.7.10-23"
+        return "11.7.10-23.4"

     def __resolve_kepler_geforce_framebuffers(self) -> str:
@@ -509,8 +509,8 @@ class SystemPatchDictionary():
             },
             "Install": {
                 "/System/Library/PrivateFrameworks": {
-                    "AppleGVA.framework": "10.15.7",
-                    "AppleGVACore.framework": "10.15.7",
+                    "AppleGVA.framework": "11.7.10",
+                    "AppleGVACore.framework": "11.7.10",
                 },
             },
         },
@@ -1031,13 +1031,13 @@ class SystemPatchDictionary():
             },
             "Install": {
                 "/System/Library/Extensions": {
-                    "AppleIntelHD4000GraphicsGLDriver.bundle": "11.0 Beta 6",
-                    "AppleIntelHD4000GraphicsMTLDriver.bundle": "11.0 Beta 6" if self.os_major < os_data.os_data.ventura else "11.0-beta 6-22",
-                    "AppleIntelHD4000GraphicsVADriver.bundle": "11.3 Beta 1",
+                    "AppleIntelHD4000GraphicsGLDriver.bundle": "11.7.10",
+                    "AppleIntelHD4000GraphicsMTLDriver.bundle": "11.7.10" if self.os_major < os_data.os_data.ventura else "11.7.10-22",
+                    "AppleIntelHD4000GraphicsVADriver.bundle": "11.7.10",
                     "AppleIntelFramebufferCapri.kext": self.__resolve_ivy_bridge_framebuffers(),
                     "AppleIntelHD4000Graphics.kext": self.__resolve_ivy_bridge_framebuffers(),
-                    "AppleIntelIVBVA.bundle": "11.4",
-                    "AppleIntelGraphicsShared.bundle": "11.4", # libIGIL-Metal.dylib pulled from 11.0 Beta 6
+                    "AppleIntelIVBVA.bundle": "11.7.10",
+                    "AppleIntelGraphicsShared.bundle": "11.7.10", # libIGIL-Metal.dylib pulled from 11.0 Beta 6
                 },
             },
         },
@@ -1239,12 +1239,12 @@ class SystemPatchDictionary():
                     "wifip2pd": "13.6.5",
                 },
                 "/System/Library/Frameworks": {
-                    "CoreWLAN.framework": "13.6.5",
+                    "CoreWLAN.framework": f"13.6.5-{self.os_major}",
                 },
                 "/System/Library/PrivateFrameworks": {
-                    "CoreWiFi.framework": "13.6.5",
-                    "IO80211.framework": "13.6.5",
-                    "WiFiPeerToPeer.framework": "13.6.5",
+                    "CoreWiFi.framework": f"13.6.5-{self.os_major}",
+                    "IO80211.framework": f"13.6.5-{self.os_major}",
+                    "WiFiPeerToPeer.framework": f"13.6.5-{self.os_major}",
                 },
             },
         },
@@ -1406,7 +1406,7 @@ class SystemPatchDictionary():
             },
             "Install": {
                 "/System/Library/Frameworks": {
-                    "LocalAuthentication.framework": "13.6" # Required for Password Authentication (SharedUtils.framework)
+                    "LocalAuthentication.framework": f"13.6-{self.os_major}" # Required for Password Authentication (SharedUtils.framework)
                 },
                 "/System/Library/PrivateFrameworks": {
                     "EmbeddedOSInstall.framework": "13.6" # Required for biometrickitd

View File

@@ -142,11 +142,15 @@ class BuildGraphicsAudio:
         iMac MXM dGPU Backlight DevicePath Detection
         """
-        if not self.constants.custom_model and self.computer.dgpu and self.computer.dgpu.pci_path:
+        if not self.constants.custom_model:
             for i, device in enumerate(self.computer.gpus):
                 logging.info(f"- Found dGPU ({i + 1}): {utilities.friendly_hex(device.vendor_id)}:{utilities.friendly_hex(device.device_id)}")
                 self.config["#Revision"][f"Hardware-iMac-dGPU-{i + 1}"] = f"{utilities.friendly_hex(device.vendor_id)}:{utilities.friendly_hex(device.device_id)}"
+                # Work-around for AMD Navi MXM cards with PCIe bridge
+                if not self.computer.dgpu:
+                    self.computer.dgpu = self.computer.gpus[i]
                 if device.pci_path != self.computer.dgpu.pci_path:
                     logging.info("- device path and GFX0 Device path are different")
                     self.gfx0_path = device.pci_path

View File

@@ -150,9 +150,10 @@ class BuildStorage:
         # Restore S1X/S3X NVMe support removed in 14.0 Beta 2
         # Apple's usage of the S1X and S3X is quite sporadic and inconsistent, so we'll try a catch all for units with NVMe drives
+        # Additionally expanded to cover all Mac models with the 12+16 pin SSD layout, for older machines with newer drives
         if self.constants.custom_model and self.model in smbios_data.smbios_dictionary:
             if "CPU Generation" in smbios_data.smbios_dictionary[self.model]:
-                if cpu_data.CPUGen.broadwell <= smbios_data.smbios_dictionary[self.model]["CPU Generation"] <= cpu_data.CPUGen.kaby_lake:
+                if (cpu_data.CPUGen.haswell <= smbios_data.smbios_dictionary[self.model]["CPU Generation"] <= cpu_data.CPUGen.kaby_lake) or self.model in ["MacPro6,1"]:
                     support.BuildSupport(self.model, self.constants, self.config).enable_kext("IOS3XeFamily.kext", self.constants.s3x_nvme_version, self.constants.s3x_nvme_path)

         # Apple RAID Card check
@@ -193,4 +194,4 @@ class BuildStorage:
         if self.constants.apfs_trim_timeout is False:
             logging.info(f"- Disabling APFS TRIM timeout")
             self.config["Kernel"]["Quirks"]["SetApfsTrimTimeout"] = 0

View File

@@ -0,0 +1,111 @@
"""
sucatalog: Python module for querying Apple's Software Update Catalog, supporting Tiger through Sequoia.
-------------------
## Usage
### Get Software Update Catalog URL
```python
>>> import sucatalog
>>> # Defaults to PublicRelease seed
>>> url = sucatalog.CatalogURL().url
"https://swscan.apple.com/.../index-15-14-13-12-10.16-10.15-10.14-10.13-10.12-10.11-10.10-10.9-mountainlion-lion-snowleopard-leopard.merged-1.sucatalog"
>>> url = sucatalog.CatalogURL(seed=sucatalog.SeedType.DeveloperSeed).url
"https://swscan.apple.com/.../index-15seed-15-14-13-12-10.16-10.15-10.14-10.13-10.12-10.11-10.10-10.9-mountainlion-lion-snowleopard-leopard.merged-1.sucatalog"
>>> url = sucatalog.CatalogURL(version=sucatalog.CatalogVersion.HIGH_SIERRA).url
"https://swscan.apple.com/.../index-10.13-10.12-10.11-10.10-10.9-mountainlion-lion-snowleopard-leopard.merged-1.sucatalog"
```
### Parse Software Update Catalog - InstallAssistants only
>>> import sucatalog
>>> # Pass contents of URL (as dictionary)
>>> catalog = plistlib.loads(requests.get(url).content)
>>> products = sucatalog.CatalogProducts(catalog).products
[
{
'Build': '22G720',
'Catalog': <SeedType.PublicRelease: ''>,
'InstallAssistant': {
'IntegrityDataSize': 42008,
'IntegrityDataURL': 'https://swcdn.apple.com/.../InstallAssistant.pkg.integrityDataV1',
'Size': 12210304673,
'URL': 'https://swcdn.apple.com/.../InstallAssistant.pkg'
},
'PostDate': datetime.datetime(2024, 5, 20, 17, 18, 21),
'ProductID': '052-96247',
'Title': 'macOS Ventura',
'Version': '13.6.7'
}
]
### Parse Software Update Catalog - All products
By default, `CatalogProducts` will only return InstallAssistants. To get all products, set `install_assistants_only=False`.
```python
>>> import sucatalog
>>> # Pass contents of URL (as dictionary)
>>> sucatalog.CatalogProducts(catalog, install_assistants_only=False).products
[
    {
        'Build': None,
        'Catalog': None,
        'Packages': [
            {
                'MetadataURL': 'https://swdist.apple.com/.../iLifeSlideshow_v2.pkm',
                'Size': 116656956,
                'URL': 'http://swcdn.apple.com/.../iLifeSlideshow_v2.pkg'
            },
            {
                'MetadataURL': 'https://swdist.apple.com/.../iPhoto9.2.3ContentUpdate.pkm',
                'Size': 59623907,
                'URL': 'http://swcdn.apple.com/.../iPhoto9.2.3ContentUpdate.pkg'
            },
            {
                'MetadataURL': 'https://swdist.apple.com/.../iPhoto9.2.3Update.pkm',
                'Size': 197263405,
                'URL': 'http://swcdn.apple.com/.../iPhoto9.2.3Update.pkg'
            }
        ],
        'PostDate': datetime.datetime(2019, 10, 23, 0, 2, 42),
        'ProductID': '041-85230',
        'Title': 'iPhoto Update',
        'Version': '9.2.3'
    },
    {
        'Build': None,
        'Catalog': None,
        'Packages': [
            {
                'Digest': '9aba109078feec7ea841529e955440b63d7755a0',
                'MetadataURL': 'https://swdist.apple.com/.../iPhoto9.4.3Update.pkm',
                'Size': 555246460,
                'URL': 'http://swcdn.apple.com/.../iPhoto9.4.3Update.pkg'
            },
            {
                'Digest': '0bb013221ca2df5e178d950cb229f41b8e680d00',
                'MetadataURL': 'https://swdist.apple.com/.../iPhoto9.4.3ContentUpdate.pkm',
                'Size': 213073666,
                'URL': 'http://swcdn.apple.com/.../iPhoto9.4.3ContentUpdate.pkg'
            }
        ],
        'PostDate': datetime.datetime(2019, 10, 13, 3, 23, 14),
        'ProductID': '041-88859',
        'Title': 'iPhoto Update',
        'Version': '9.4.3'
    }
]
```
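There is also a `latest_products` cached property that trims such a list down to the newest entries. As a rough standalone illustration of the idea, the sketch below keeps only the highest version per title, using a naive numeric parse instead of the `packaging` library the module itself relies on (`latest_per_title` is a hypothetical helper for illustration only):

```python
def latest_per_title(products: list[dict]) -> dict[str, dict]:
    """Keep the entry with the highest Version for each Title (naive parse)."""
    def key(version: str) -> tuple[int, ...]:
        return tuple(int(part) for part in version.split("."))

    latest: dict[str, dict] = {}
    for product in products:
        current = latest.get(product["Title"])
        if current is None or key(product["Version"]) > key(current["Version"]):
            latest[product["Title"]] = product
    return latest

products = [
    {"Title": "iPhoto Update", "Version": "9.2.3"},
    {"Title": "iPhoto Update", "Version": "9.4.3"},
]
print(latest_per_title(products)["iPhoto Update"]["Version"])  # 9.4.3
```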
"""
from .url import CatalogURL
from .constants import CatalogVersion, SeedType
from .products import CatalogProducts


@@ -0,0 +1,57 @@
"""
constants.py: Enumerations for sucatalog-py
"""
from enum import StrEnum
class SeedType(StrEnum):
"""
Enum for catalog types
Variants:
DeveloperSeed: Developer Beta (Part of the Apple Developer Program)
PublicSeed: Public Beta
CustomerSeed: AppleSeed Program (Generally mirrors DeveloperSeed)
PublicRelease: Public Release
"""
DeveloperSeed: str = "seed"
PublicSeed: str = "beta"
CustomerSeed: str = "customerseed"
PublicRelease: str = ""
class CatalogVersion(StrEnum):
"""
Enum for macOS versions
Used for generating sucatalog URLs
"""
SEQUOIA: str = "15"
SONOMA: str = "14"
VENTURA: str = "13"
MONTEREY: str = "12"
BIG_SUR: str = "11"
BIG_SUR_LEGACY: str = "10.16"
CATALINA: str = "10.15"
MOJAVE: str = "10.14"
HIGH_SIERRA: str = "10.13"
SIERRA: str = "10.12"
EL_CAPITAN: str = "10.11"
YOSEMITE: str = "10.10"
MAVERICKS: str = "10.9"
MOUNTAIN_LION: str = "mountainlion"
LION: str = "lion"
SNOW_LEOPARD: str = "snowleopard"
LEOPARD: str = "leopard"
TIGER: str = ""
class CatalogExtension(StrEnum):
"""
Enum for catalog extensions
Used for generating sucatalog URLs
"""
PLIST: str = ".sucatalog"
GZIP: str = ".sucatalog.gz"
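`StrEnum` (Python 3.11+) makes each member a real `str`, so values can be dropped straight into URL formatting and looked up by value. A small self-contained sketch, redefining two members locally with the equivalent `str`-mixin form rather than importing the module:

```python
from enum import Enum

class SeedType(str, Enum):
    """str-mixin enum; behaves like StrEnum for value access and lookup."""
    DeveloperSeed = "seed"
    PublicRelease = ""

# Members carry their string value, so URL suffixes compose directly.
print(f"index-15{SeedType.DeveloperSeed.value}")   # index-15seed
# Lookup by value maps a URL fragment back to a seed type.
print(SeedType("") is SeedType.PublicRelease)      # True
```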

View File

@@ -0,0 +1,407 @@
"""
products.py: Parse products from Software Update Catalog
"""
import re
import plistlib
import packaging.version
import xml.etree.ElementTree as ET
from pathlib import Path
from functools import cached_property
from .url import CatalogURL
from .constants import CatalogVersion, SeedType
from ..support import network_handler
class CatalogProducts:
"""
Args:
catalog (dict): Software Update Catalog (contents of CatalogURL's URL)
install_assistants_only (bool): Only list InstallAssistant products
only_vmm_install_assistants (bool): Only list VMM-x86_64-compatible InstallAssistant products
max_install_assistant_version (CatalogVersion): Maximum InstallAssistant version to list
"""
def __init__(self,
catalog: dict,
install_assistants_only: bool = True,
only_vmm_install_assistants: bool = True,
max_install_assistant_version: CatalogVersion = CatalogVersion.SONOMA
) -> None:
self.catalog: dict = catalog
self.ia_only: bool = install_assistants_only
self.vmm_only: bool = only_vmm_install_assistants
self.max_ia_version: packaging.version.Version = packaging.version.parse(f"{max_install_assistant_version.value}.99.99")
self.max_ia_catalog: CatalogVersion = max_install_assistant_version
def _legacy_parse_info_plist(self, data: dict) -> dict:
"""
Legacy version of parsing for installer details through Info.plist
"""
if "MobileAssetProperties" not in data:
return {}
if "SupportedDeviceModels" not in data["MobileAssetProperties"]:
return {}
if "OSVersion" not in data["MobileAssetProperties"]:
return {}
if "Build" not in data["MobileAssetProperties"]:
return {}
# Ensure Apple Silicon specific Installers are not listed
if "VMM-x86_64" not in data["MobileAssetProperties"]["SupportedDeviceModels"]:
if self.vmm_only:
return {}
version = data["MobileAssetProperties"]["OSVersion"]
build = data["MobileAssetProperties"]["Build"]
catalog = ""
try:
catalog = data["MobileAssetProperties"]["BridgeVersionInfo"]["CatalogURL"]
except KeyError:
pass
if version is None or build is None:
return {}
return {
"Version": version,
"Build": build,
"Catalog": CatalogURL().catalog_url_to_seed(catalog),
}
def _parse_mobile_asset_plist(self, data: dict) -> dict:
"""
Parses the MobileAsset plist for installer details
With macOS Sequoia, the Info.plist is no longer present in the InstallAssistant's assets
"""
for entry in data["Assets"]:
if "SupportedDeviceModels" not in entry:
continue
if "OSVersion" not in entry:
continue
if "Build" not in entry:
continue
if "VMM-x86_64" not in entry["SupportedDeviceModels"]:
if self.vmm_only:
continue
build = entry["Build"]
version = entry["OSVersion"]
catalog_url = ""
try:
catalog_url = entry["BridgeVersionInfo"]["CatalogURL"]
except KeyError:
pass
return {
"Version": version,
"Build": build,
"Catalog": CatalogURL().catalog_url_to_seed(catalog_url),
}
return {}
def _parse_english_distributions(self, data: bytes) -> dict:
"""
Resolve Title, Build and Version from the English distribution file
"""
try:
plist_contents = plistlib.loads(data)
except plistlib.InvalidFileException:
plist_contents = None
try:
xml_contents = ET.fromstring(data)
except ET.ParseError:
xml_contents = None
_product_map = {
"Title": None,
"Build": None,
"Version": None,
}
if plist_contents:
if "macOSProductBuildVersion" in plist_contents:
_product_map["Build"] = plist_contents["macOSProductBuildVersion"]
if "macOSProductVersion" in plist_contents:
_product_map["Version"] = plist_contents["macOSProductVersion"]
if "BUILD" in plist_contents:
_product_map["Build"] = plist_contents["BUILD"]
if "VERSION" in plist_contents:
_product_map["Version"] = plist_contents["VERSION"]
if xml_contents:
# Fetch item title
item_title = xml_contents.find(".//title").text
if item_title in ["SU_TITLE", "MANUAL_TITLE", "MAN_TITLE"]:
# regex search the contents for the title
title_search = re.search(r'"SU_TITLE"\s*=\s*"(.*)";', data.decode("utf-8"))
if title_search:
item_title = title_search.group(1)
_product_map["Title"] = item_title
return _product_map
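The distribution files are XML with an embedded localization table: the `<title>` element often holds a key such as `SU_TITLE`, resolved by a regex over the strings table elsewhere in the file. A self-contained sketch of that fallback, using a made-up distribution snippet (the data and strings table here are illustrative, not real catalog content):

```python
import re
import xml.etree.ElementTree as ET

# Hypothetical English distribution: <title> is a localization key
data = b"""<installer-gui-script minSpecVersion='1'>
    <title>SU_TITLE</title>
</installer-gui-script>
"""
# Hypothetical strings table embedded elsewhere in the distribution
strings = '"SU_TITLE" = "macOS Ventura";'

title = ET.fromstring(data).find(".//title").text
if title in ("SU_TITLE", "MANUAL_TITLE", "MAN_TITLE"):
    match = re.search(r'"SU_TITLE"\s*=\s*"(.*)";', strings)
    if match:
        title = match.group(1)
print(title)  # macOS Ventura
```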
def _build_installer_name(self, version: str, catalog: SeedType) -> str:
"""
Builds the installer name based on the version and catalog
"""
try:
marketing_name = CatalogVersion(version.split(".")[0]).name
except ValueError:
marketing_name = "Unknown"
# Replace _ with space
marketing_name = marketing_name.replace("_", " ")
# Capitalize each word
marketing_name = "macOS " + " ".join([word.capitalize() for word in marketing_name.split()])
# Append Beta if needed
if catalog in [SeedType.DeveloperSeed, SeedType.PublicSeed, SeedType.CustomerSeed]:
marketing_name += " Beta"
return marketing_name
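The marketing name is just an enum reverse lookup on the major version plus some string massaging. A condensed, self-contained sketch using a local version table in place of `CatalogVersion` (`installer_name` and `VERSIONS` are illustrative stand-ins, not the module's API):

```python
# Local copy of a few CatalogVersion member names, for illustration
VERSIONS = {"15": "SEQUOIA", "14": "SONOMA", "13": "VENTURA"}

def installer_name(version: str, beta: bool = False) -> str:
    """Map '15.0' -> 'macOS Sequoia', appending ' Beta' for seed catalogs."""
    raw = VERSIONS.get(version.split(".")[0], "Unknown")
    name = "macOS " + " ".join(word.capitalize() for word in raw.replace("_", " ").split())
    return name + " Beta" if beta else name

print(installer_name("15.0"))               # macOS Sequoia
print(installer_name("14.6.1", beta=True))  # macOS Sonoma Beta
```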
def _list_latest_installers_only(self, products: list) -> list:
"""
List only the latest installers per macOS version
macOS versions capped at n-3 (n being the latest macOS version)
"""
supported_versions = []
# Build list of supported versions (n to n-3, where n is the latest macOS version set)
did_find_latest = False
for version in CatalogVersion:
if did_find_latest is False:
if version != self.max_ia_catalog:
continue
did_find_latest = True
supported_versions.append(version)
if len(supported_versions) == 4:
break
# Invert the list
supported_versions = supported_versions[::-1]
# Create duplicate product list
products_copy = products.copy()
# Remove all but the newest version
for version in supported_versions:
_newest_version = packaging.version.parse("0.0.0")
# First, determine largest version
for installer in products:
if installer["Version"] is None:
continue
if not installer["Version"].startswith(version.value):
continue
if installer["Catalog"] in [SeedType.CustomerSeed, SeedType.DeveloperSeed, SeedType.PublicSeed]:
continue
try:
if packaging.version.parse(installer["Version"]) > _newest_version:
_newest_version = packaging.version.parse(installer["Version"])
except packaging.version.InvalidVersion:
pass
# Next, remove all installers that are not the newest version
for installer in products:
if installer["Version"] is None:
continue
if not installer["Version"].startswith(version.value):
continue
try:
if packaging.version.parse(installer["Version"]) < _newest_version:
if installer in products_copy:
products_copy.pop(products_copy.index(installer))
except packaging.version.InvalidVersion:
pass
# Remove beta versions if a public release is available
if _newest_version != packaging.version.parse("0.0.0"):
if installer["Catalog"] in [SeedType.CustomerSeed, SeedType.DeveloperSeed, SeedType.PublicSeed]:
if installer in products_copy:
products_copy.pop(products_copy.index(installer))
# Remove EOL versions (older than n-3)
for installer in products:
if installer["Version"].split(".")[0] < supported_versions[-4].value:
if installer in products_copy:
products_copy.pop(products_copy.index(installer))
return products_copy
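The support window itself is simple to derive: starting from the configured latest version, the version list is walked until four entries are collected, then inverted so the oldest comes first. A standalone sketch over a local, abridged version list (assumed data, not the real `CatalogVersion` enum):

```python
VERSIONS = ["15", "14", "13", "12", "11", "10.15"]  # newest-first, abridged

def support_window(latest: str, size: int = 4) -> list[str]:
    """Return `size` versions starting at `latest` (n through n-3), oldest-first."""
    start = VERSIONS.index(latest)
    return VERSIONS[start:start + size][::-1]

print(support_window("14"))  # ['11', '12', '13', '14']
```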
@cached_property
def products(self) -> list:
"""
Returns a list of products from the sucatalog
"""
catalog = self.catalog
_products = []
for product in catalog["Products"]:
# InstallAssistants.pkgs (macOS Installers) will have the following keys:
if self.ia_only:
if "ExtendedMetaInfo" not in catalog["Products"][product]:
continue
if "InstallAssistantPackageIdentifiers" not in catalog["Products"][product]["ExtendedMetaInfo"]:
continue
if "SharedSupport" not in catalog["Products"][product]["ExtendedMetaInfo"]["InstallAssistantPackageIdentifiers"]:
continue
_product_map = {
"ProductID": product,
"PostDate": catalog["Products"][product]["PostDate"],
"Title": None,
"Build": None,
"Version": None,
"Catalog": None,
# Optional keys if not InstallAssistant only:
# "Packages": None,
# Optional keys if InstallAssistant found:
# "InstallAssistant": {
# "URL": None,
# "Size": None,
# "XNUMajor": None,
# "IntegrityDataURL": None,
# "IntegrityDataSize": None
# },
}
# InstallAssistant logic
if "Packages" in catalog["Products"][product]:
# Add packages to product map if not InstallAssistant only
if self.ia_only is False:
_product_map["Packages"] = catalog["Products"][product]["Packages"]
for package in catalog["Products"][product]["Packages"]:
if "URL" in package:
if Path(package["URL"]).name == "InstallAssistant.pkg":
_product_map["InstallAssistant"] = {
"URL": package["URL"],
"Size": package["Size"],
"IntegrityDataURL": package["IntegrityDataURL"],
"IntegrityDataSize": package["IntegrityDataSize"]
}
if Path(package["URL"]).name not in ["Info.plist", "com_apple_MobileAsset_MacSoftwareUpdate.plist"]:
continue
net_obj = network_handler.NetworkUtilities().get(package["URL"])
if net_obj is None:
continue
contents = net_obj.content
try:
plist_contents = plistlib.loads(contents)
except plistlib.InvalidFileException:
continue
if plist_contents:
if Path(package["URL"]).name == "Info.plist":
_product_map.update(self._legacy_parse_info_plist(plist_contents))
else:
_product_map.update(self._parse_mobile_asset_plist(plist_contents))
if _product_map["Version"] is not None:
_product_map["Title"] = self._build_installer_name(_product_map["Version"], _product_map["Catalog"])
# Fall back to English distribution if no version is found
if _product_map["Version"] is None:
url = None
if "Distributions" in catalog["Products"][product]:
if "English" in catalog["Products"][product]["Distributions"]:
url = catalog["Products"][product]["Distributions"]["English"]
elif "en" in catalog["Products"][product]["Distributions"]:
url = catalog["Products"][product]["Distributions"]["en"]
if url is None:
continue
net_obj = network_handler.NetworkUtilities().get(url)
if net_obj is None:
continue
contents = net_obj.content
_product_map.update(self._parse_english_distributions(contents))
if _product_map["Version"] is None:
if "ServerMetadataURL" in catalog["Products"][product]:
server_metadata_url = catalog["Products"][product]["ServerMetadataURL"]
net_obj = network_handler.NetworkUtilities().get(server_metadata_url)
if net_obj is None:
continue
server_metadata_contents = net_obj.content
try:
server_metadata_plist = plistlib.loads(server_metadata_contents)
except plistlib.InvalidFileException:
server_metadata_plist = {}
if "CFBundleShortVersionString" in server_metadata_plist:
_product_map["Version"] = server_metadata_plist["CFBundleShortVersionString"]
if _product_map["Version"] is not None:
# Check if version is newer than the max version
if self.ia_only:
try:
if packaging.version.parse(_product_map["Version"]) > self.max_ia_version:
continue
except packaging.version.InvalidVersion:
pass
if _product_map["Build"] is not None:
if "InstallAssistant" in _product_map:
try:
# Grab first 2 characters of build
_product_map["InstallAssistant"]["XNUMajor"] = int(_product_map["Build"][:2])
except ValueError:
pass
# If version is still None, set to 0.0.0
if _product_map["Version"] is None:
_product_map["Version"] = "0.0.0"
_products.append(_product_map)
_products = sorted(_products, key=lambda x: x["Version"])
return _products
@cached_property
def latest_products(self) -> list:
"""
Returns a list of the latest products from the sucatalog
"""
return self._list_latest_installers_only(self.products)


@@ -0,0 +1,175 @@
"""
url.py: Generate URL for Software Update Catalog
Usage:
>>> import sucatalog
>>> sucatalog.CatalogURL().url
'https://swscan.apple.com/content/catalogs/others/index-15-14-13-12-10.16-10.15-10.14-10.13-10.12-10.11-10.10-10.9-mountainlion-lion-snowleopard-leopard.merged-1.sucatalog'
"""
import logging
import plistlib
from .constants import (
SeedType,
CatalogVersion,
CatalogExtension
)
from ..support import network_handler
class CatalogURL:
"""
Provides URL generation for Software Update Catalog
Args:
version (CatalogVersion): Version of macOS
seed (SeedType): Seed type
extension (CatalogExtension): Extension for the catalog URL
"""
def __init__(self,
version: CatalogVersion = CatalogVersion.SONOMA,
seed: SeedType = SeedType.PublicRelease,
extension: CatalogExtension = CatalogExtension.PLIST
) -> None:
self.version = version
self.seed = seed
self.extension = extension
self.seed = self._fix_seed_type()
self.version = self._fix_version()
def _fix_seed_type(self) -> SeedType:
"""
Fixes seed type for URL generation
"""
# Pre-Mountain Lion lacked seed types
if self.version in [CatalogVersion.LION, CatalogVersion.SNOW_LEOPARD, CatalogVersion.LEOPARD, CatalogVersion.TIGER]:
if self.seed != SeedType.PublicRelease:
logging.warning(f"{self.seed.name} not supported for {self.version.name}, defaulting to PublicRelease")
return SeedType.PublicRelease
# Pre-Yosemite lacked PublicSeed/CustomerSeed, thus override to DeveloperSeed
if self.version in [CatalogVersion.MAVERICKS, CatalogVersion.MOUNTAIN_LION]:
if self.seed in [SeedType.PublicSeed, SeedType.CustomerSeed]:
logging.warning(f"{self.seed.name} not supported for {self.version.name}, defaulting to DeveloperSeed")
return SeedType.DeveloperSeed
return self.seed
def _fix_version(self) -> CatalogVersion:
"""
Fixes version for URL generation
"""
if self.version == CatalogVersion.BIG_SUR:
return CatalogVersion.BIG_SUR_LEGACY
return self.version
def _fetch_versions_for_url(self) -> list:
"""
Fetches versions for URL generation
"""
versions: list = []
_did_hit_variant: bool = False
for variant in CatalogVersion:
# Avoid appending versions newer than the current version
if variant == self.version:
_did_hit_variant = True
if _did_hit_variant is False:
continue
# Skip invalid version
if variant in [CatalogVersion.BIG_SUR, CatalogVersion.TIGER]:
continue
versions.append(variant.value)
if self.version == CatalogVersion.SNOW_LEOPARD:
# Reverse list pre-Lion (ie. just Snow Leopard, since Lion is a list of one)
versions = versions[::-1]
return versions
def _construct_catalog_url(self) -> str:
"""
Constructs the catalog URL based on the seed type
"""
url: str = "https://swscan.apple.com/content/catalogs"
if self.version == CatalogVersion.TIGER:
url += "/index"
else:
url += "/others/index"
if self.seed in [SeedType.DeveloperSeed, SeedType.PublicSeed, SeedType.CustomerSeed]:
url += f"-{self.version.value}"
if self.version == CatalogVersion.MAVERICKS and self.seed == SeedType.CustomerSeed:
# Apple previously used 'publicseed' for CustomerSeed in Mavericks
url += "publicseed"
else:
url += f"{self.seed.value}"
# 10.10 and older don't append versions for CustomerSeed
if self.seed == SeedType.CustomerSeed and self.version in [
CatalogVersion.YOSEMITE,
CatalogVersion.MAVERICKS,
CatalogVersion.MOUNTAIN_LION,
CatalogVersion.LION,
CatalogVersion.SNOW_LEOPARD,
CatalogVersion.LEOPARD
]:
pass
else:
for version in self._fetch_versions_for_url():
url += f"-{version}"
if self.version != CatalogVersion.TIGER:
url += ".merged-1"
url += self.extension.value
return url
def catalog_url_to_seed(self, catalog_url: str) -> SeedType:
"""
Converts the Catalog URL to a SeedType
"""
if "beta" in catalog_url:
return SeedType.PublicSeed
elif "customerseed" in catalog_url:
return SeedType.CustomerSeed
elif "seed" in catalog_url:
return SeedType.DeveloperSeed
return SeedType.PublicRelease
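Note the branch order in `catalog_url_to_seed` matters: `"customerseed"` contains `"seed"` as a substring, so the more specific markers must be tested before the generic one. A standalone sketch of the same classification (returning plain strings instead of `SeedType` members):

```python
def seed_from_url(url: str) -> str:
    """Classify a catalog URL by its seed marker; most-specific substring first."""
    if "beta" in url:
        return "PublicSeed"
    if "customerseed" in url:   # must precede the bare "seed" check
        return "CustomerSeed"
    if "seed" in url:
        return "DeveloperSeed"
    return "PublicRelease"

print(seed_from_url(".../index-15customerseed-15.sucatalog"))  # CustomerSeed
print(seed_from_url(".../index-15seed-15.sucatalog"))          # DeveloperSeed
print(seed_from_url(".../index-15-14.sucatalog"))              # PublicRelease
```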
@property
def url(self) -> str:
"""
Generate URL for Software Update Catalog
Returns:
str: URL for Software Update Catalog
"""
return self._construct_catalog_url()
@property
def url_contents(self) -> dict:
"""
Return URL contents
"""
try:
return plistlib.loads(network_handler.NetworkUtilities().get(self.url).content)
except Exception as e:
logging.error(f"Failed to fetch URL contents: {e}")
return None


@@ -17,15 +17,14 @@ from .. import constants
 from ..wx_gui import gui_entry
 from ..efi_builder import build
+from ..sys_patch import sys_patch
+from ..sys_patch.auto_patcher import StartAutomaticPatching
 from ..datasets import (
     model_array,
     os_data
 )
-from ..sys_patch import (
-    sys_patch,
-    sys_patch_auto
-)
 from . import (
     utilities,
     defaults,
@@ -118,7 +117,7 @@ class arguments:
         """
         logging.info("Set Auto patching")
-        sys_patch_auto.AutomaticSysPatch(self.constants).start_auto_patch()
+        StartAutomaticPatching(self.constants).start_auto_patch()

     def _prepare_for_update_handler(self) -> None:


@@ -1,292 +0,0 @@
#################################################################################
# Copyright (C) 2009-2011 Vladimir "Farcaller" Pouzanov <farcaller@gmail.com> #
# #
# Permission is hereby granted, free of charge, to any person obtaining a copy #
# of this software and associated documentation files (the "Software"), to deal #
# in the Software without restriction, including without limitation the rights #
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell #
# copies of the Software, and to permit persons to whom the Software is #
# furnished to do so, subject to the following conditions: #
# #
# The above copyright notice and this permission notice shall be included in #
# all copies or substantial portions of the Software. #
# #
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR #
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, #
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE #
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER #
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, #
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN #
# THE SOFTWARE. #
#################################################################################
import struct
import codecs
from datetime import datetime, timedelta
class BPListWriter(object):
def __init__(self, objects):
self.bplist = ""
self.objects = objects
def binary(self):
'''binary -> string
Generates bplist
'''
self.data = 'bplist00'
# TODO: flatten objects and count max length size
# TODO: write objects and save offsets
# TODO: write offsets
# TODO: write metadata
return self.data
def write(self, filename):
'''
Writes bplist to file
'''
if self.bplist != "":
pass
# TODO: save self.bplist to file
else:
raise Exception('BPlist not yet generated')
class BPListReader(object):
def __init__(self, s):
self.data = s
self.objects = []
self.resolved = {}
def __unpackIntStruct(self, sz, s):
'''__unpackIntStruct(size, string) -> int
Unpacks the integer of given size (1, 2 or 4 bytes) from string
'''
if sz == 1:
ot = '!B'
elif sz == 2:
ot = '!H'
elif sz == 4:
ot = '!I'
elif sz == 8:
ot = '!Q'
else:
raise Exception('int unpack size '+str(sz)+' unsupported')
return struct.unpack(ot, s)[0]
def __unpackInt(self, offset):
'''__unpackInt(offset) -> int
Unpacks int field from plist at given offset
'''
return self.__unpackIntMeta(offset)[1]
def __unpackIntMeta(self, offset):
'''__unpackIntMeta(offset) -> (size, int)
Unpacks int field from plist at given offset and returns its size and value
'''
obj_header = self.data[offset]
obj_type, obj_info = (obj_header & 0xF0), (obj_header & 0x0F)
int_sz = 2**obj_info
return int_sz, self.__unpackIntStruct(int_sz, self.data[offset+1:offset+1+int_sz])
def __resolveIntSize(self, obj_info, offset):
'''__resolveIntSize(obj_info, offset) -> (count, offset)
Calculates count of objref* array entries and returns count and offset to first element
'''
if obj_info == 0x0F:
ofs, obj_count = self.__unpackIntMeta(offset+1)
objref = offset+2+ofs
else:
obj_count = obj_info
objref = offset+1
return obj_count, objref
def __unpackFloatStruct(self, sz, s):
'''__unpackFloatStruct(size, string) -> float
Unpacks the float of given size (4 or 8 bytes) from string
'''
if sz == 4:
ot = '!f'
elif sz == 8:
ot = '!d'
else:
raise Exception('float unpack size '+str(sz)+' unsupported')
return struct.unpack(ot, s)[0]
def __unpackFloat(self, offset):
'''__unpackFloat(offset) -> float
Unpacks float field from plist at given offset
'''
obj_header = self.data[offset]
obj_type, obj_info = (obj_header & 0xF0), (obj_header & 0x0F)
int_sz = 2**obj_info
return int_sz, self.__unpackFloatStruct(int_sz, self.data[offset+1:offset+1+int_sz])
def __unpackDate(self, offset):
td = int(struct.unpack(">d", self.data[offset+1:offset+9])[0])
return datetime(year=2001,month=1,day=1) + timedelta(seconds=td)
def __unpackItem(self, offset):
'''__unpackItem(offset)
Unpacks and returns an item from plist
'''
obj_header = self.data[offset]
obj_type, obj_info = (obj_header & 0xF0), (obj_header & 0x0F)
if obj_type == 0x00:
if obj_info == 0x00: # null 0000 0000
return None
elif obj_info == 0x08: # bool 0000 1000 // false
return False
elif obj_info == 0x09: # bool 0000 1001 // true
return True
elif obj_info == 0x0F: # fill 0000 1111 // fill byte
raise Exception("0x0F Not Implemented") # this is really pad byte, FIXME
else:
raise Exception('unpack item type '+str(obj_header)+' at '+str(offset)+ 'failed')
elif obj_type == 0x10: # int 0001 nnnn ... // # of bytes is 2^nnnn, big-endian bytes
return self.__unpackInt(offset)
elif obj_type == 0x20: # real 0010 nnnn ... // # of bytes is 2^nnnn, big-endian bytes
return self.__unpackFloat(offset)
elif obj_type == 0x30: # date 0011 0011 ... // 8 byte float follows, big-endian bytes
return self.__unpackDate(offset)
elif obj_type == 0x40: # data 0100 nnnn [int] ... // nnnn is number of bytes unless 1111 then int count follows, followed by bytes
obj_count, objref = self.__resolveIntSize(obj_info, offset)
return self.data[objref:objref+obj_count] # XXX: we return data as str
elif obj_type == 0x50: # string 0101 nnnn [int] ... // ASCII string, nnnn is # of chars, else 1111 then int count, then bytes
obj_count, objref = self.__resolveIntSize(obj_info, offset)
return self.data[objref:objref+obj_count]
elif obj_type == 0x60: # string 0110 nnnn [int] ... // Unicode string, nnnn is # of chars, else 1111 then int count, then big-endian 2-byte uint16_t
obj_count, objref = self.__resolveIntSize(obj_info, offset)
return self.data[objref:objref+obj_count*2].decode('utf-16be')
elif obj_type == 0x80: # uid 1000 nnnn ... // nnnn+1 is # of bytes
# FIXME: Accept as a string for now
obj_count, objref = self.__resolveIntSize(obj_info, offset)
return self.data[objref:objref+obj_count]
elif obj_type == 0xA0: # array 1010 nnnn [int] objref* // nnnn is count, unless '1111', then int count follows
obj_count, objref = self.__resolveIntSize(obj_info, offset)
arr = []
for i in range(obj_count):
arr.append(self.__unpackIntStruct(self.object_ref_size, self.data[objref+i*self.object_ref_size:objref+i*self.object_ref_size+self.object_ref_size]))
return arr
elif obj_type == 0xC0: # set 1100 nnnn [int] objref* // nnnn is count, unless '1111', then int count follows
# XXX: not serializable via apple implementation
raise Exception("0xC0 Not Implemented") # FIXME: implement
elif obj_type == 0xD0: # dict 1101 nnnn [int] keyref* objref* // nnnn is count, unless '1111', then int count follows
obj_count, objref = self.__resolveIntSize(obj_info, offset)
keys = []
for i in range(obj_count):
keys.append(self.__unpackIntStruct(self.object_ref_size, self.data[objref+i*self.object_ref_size:objref+i*self.object_ref_size+self.object_ref_size]))
values = []
objref += obj_count*self.object_ref_size
for i in range(obj_count):
values.append(self.__unpackIntStruct(self.object_ref_size, self.data[objref+i*self.object_ref_size:objref+i*self.object_ref_size+self.object_ref_size]))
dic = {}
for i in range(obj_count):
dic[keys[i]] = values[i]
return dic
else:
raise Exception('don\'t know how to unpack obj type '+hex(obj_type)+' at '+str(offset))
def __resolveObject(self, idx):
try:
return self.resolved[idx]
except KeyError:
obj = self.objects[idx]
if type(obj) == list:
newArr = []
for i in obj:
newArr.append(self.__resolveObject(i))
self.resolved[idx] = newArr
return newArr
if type(obj) == dict:
newDic = {}
for k,v in obj.items():
key_resolved = self.__resolveObject(k)
if isinstance(key_resolved, str):
rk = key_resolved
else:
rk = codecs.decode(key_resolved, "utf-8")
rv = self.__resolveObject(v)
newDic[rk] = rv
self.resolved[idx] = newDic
return newDic
else:
self.resolved[idx] = obj
return obj
def parse(self):
# read header
if self.data[:8] != b'bplist00':
raise Exception('Bad magic')
# read trailer
self.offset_size, self.object_ref_size, self.number_of_objects, self.top_object, self.table_offset = struct.unpack('!6xBB4xI4xI4xI', self.data[-32:])
#print "** plist offset_size:",self.offset_size,"objref_size:",self.object_ref_size,"num_objs:",self.number_of_objects,"top:",self.top_object,"table_ofs:",self.table_offset
# read offset table
self.offset_table = self.data[self.table_offset:-32]
self.offsets = []
ot = self.offset_table
for i in range(self.number_of_objects):
offset_entry = ot[:self.offset_size]
ot = ot[self.offset_size:]
self.offsets.append(self.__unpackIntStruct(self.offset_size, offset_entry))
#print "** plist offsets:",self.offsets
# read object table
self.objects = []
k = 0
for i in self.offsets:
obj = self.__unpackItem(i)
#print "** plist unpacked",k,type(obj),obj,"at",i
k += 1
self.objects.append(obj)
# rebuild object tree
#for i in range(len(self.objects)):
# self.__resolveObject(i)
# return root object
return self.__resolveObject(self.top_object)
@classmethod
def plistWithString(cls, s):
parser = cls(s)
return parser.parse()
# helpers for testing
def plist(obj):
from Foundation import NSPropertyListSerialization, NSPropertyListBinaryFormat_v1_0
b = NSPropertyListSerialization.dataWithPropertyList_format_options_error_(obj, NSPropertyListBinaryFormat_v1_0, 0, None)
return str(b.bytes())
def unplist(s):
from Foundation import NSData, NSPropertyListSerialization
d = NSData.dataWithBytes_length_(s, len(s))
return NSPropertyListSerialization.propertyListWithData_options_format_error_(d, 0, None, None)
if __name__ == "__main__":
import os
import sys
import json
file_path = sys.argv[1]
with open(file_path, "rb") as fp:
data = fp.read()
out = BPListReader(data).parse()
with open(file_path + ".json", "w") as fp:
json.dump(out, fp, indent=4)


@@ -7,12 +7,9 @@ This is to ensure compatibility when running without a user
 ie. during automated patching
 """
-import os
 import logging
 import plistlib
-from . import subprocess_wrapper
 from pathlib import Path


@@ -5,7 +5,7 @@ install.py: Installation of OpenCore files to ESP
import logging import logging
import plistlib import plistlib
import subprocess import subprocess
import applescript import re
from pathlib import Path from pathlib import Path
@@ -13,8 +13,6 @@ from . import utilities, subprocess_wrapper
from .. import constants from .. import constants
from ..datasets import os_data
class tui_disk_installation: class tui_disk_installation:
def __init__(self, versions): def __init__(self, versions):
@@ -30,9 +28,15 @@ class tui_disk_installation:
# Sierra and older # Sierra and older
disks = plistlib.loads(subprocess.run(["/usr/sbin/diskutil", "list", "-plist"], stdout=subprocess.PIPE).stdout.decode().strip().encode()) disks = plistlib.loads(subprocess.run(["/usr/sbin/diskutil", "list", "-plist"], stdout=subprocess.PIPE).stdout.decode().strip().encode())
for disk in disks["AllDisksAndPartitions"]: for disk in disks["AllDisksAndPartitions"]:
disk_info = plistlib.loads(subprocess.run(["/usr/sbin/diskutil", "info", "-plist", disk["DeviceIdentifier"]], stdout=subprocess.PIPE).stdout.decode().strip().encode())
try: try:
all_disks[disk["DeviceIdentifier"]] = {"identifier": disk_info["DeviceNode"], "name": disk_info["MediaName"], "size": disk_info["TotalSize"], "partitions": {}} disk_info = plistlib.loads(subprocess.run(["/usr/sbin/diskutil", "info", "-plist", disk["DeviceIdentifier"]], stdout=subprocess.PIPE).stdout.decode().strip().encode())
except:
# Chinesium USB can have garbage data in MediaName
diskutil_output = subprocess.run(["/usr/sbin/diskutil", "info", "-plist", disk["DeviceIdentifier"]], stdout=subprocess.PIPE).stdout.decode().strip()
ungarbafied_output = re.sub(r'(<key>MediaName</key>\s*<string>).*?(</string>)', r'\1\2', diskutil_output).encode()
disk_info = plistlib.loads(ungarbafied_output)
try:
all_disks[disk["DeviceIdentifier"]] = {"identifier": disk_info["DeviceNode"], "name": disk_info.get("MediaName", "Disk"), "size": disk_info["TotalSize"], "partitions": {}}
for partition in disk["Partitions"]: for partition in disk["Partitions"]:
partition_info = plistlib.loads(subprocess.run(["/usr/sbin/diskutil", "info", "-plist", partition["DeviceIdentifier"]], stdout=subprocess.PIPE).stdout.decode().strip().encode()) partition_info = plistlib.loads(subprocess.run(["/usr/sbin/diskutil", "info", "-plist", partition["DeviceIdentifier"]], stdout=subprocess.PIPE).stdout.decode().strip().encode())
all_disks[disk["DeviceIdentifier"]]["partitions"][partition["DeviceIdentifier"]] = { all_disks[disk["DeviceIdentifier"]]["partitions"][partition["DeviceIdentifier"]] = {
@@ -101,7 +105,7 @@ class tui_disk_installation:
partition_info = plistlib.loads(subprocess.run(["/usr/sbin/diskutil", "info", "-plist", full_disk_identifier], stdout=subprocess.PIPE).stdout.decode().strip().encode())
parent_disk = partition_info["ParentWholeDisk"]
drive_host_info = plistlib.loads(subprocess.run(["/usr/sbin/diskutil", "info", "-plist", parent_disk], stdout=subprocess.PIPE).stdout.decode().strip().encode())
- sd_type = drive_host_info["MediaName"]
+ sd_type = drive_host_info.get("MediaName", "Disk")
try:
ssd_type = drive_host_info["SolidState"]
except KeyError:
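The MediaName scrub introduced above can be exercised in isolation. A minimal sketch, assuming a hypothetical diskutil-style plist (the `&#0;` reference is deliberately illegal XML, standing in for the garbage bytes some USB bridges report); note the regex empties the value rather than removing the key, so `.get("MediaName", "Disk")` returns `""` here:

```python
import re
import plistlib

# Hypothetical diskutil output whose MediaName holds bytes plistlib would reject
raw_output = """<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict>
    <key>DeviceNode</key>
    <string>/dev/disk2</string>
    <key>MediaName</key>
    <string>&#0;Garbage USB</string>
    <key>TotalSize</key>
    <integer>16008609792</integer>
</dict>
</plist>"""

# Blank out the MediaName value, exactly as the patch does, then parse
scrubbed = re.sub(r'(<key>MediaName</key>\s*<string>).*?(</string>)', r'\1\2', raw_output).encode()
disk_info = plistlib.loads(scrubbed)
print(repr(disk_info["MediaName"]))
```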

View File

@@ -2,7 +2,6 @@
kdk_handler.py: Module for parsing and determining best Kernel Debug Kit for host OS
"""
import os
import logging
import plistlib
import requests
@@ -16,6 +15,7 @@ from pathlib import Path
from .. import constants
from ..datasets import os_data
from ..volume import generate_copy_arguments
from . import (
network_handler,
@@ -668,7 +668,7 @@ class KernelDebugKitUtilities:
logging.info("Backup already exists, skipping")
return
- result = subprocess_wrapper.run_as_root(["/bin/cp", "-R", kdk_path, kdk_dst_path], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
+ result = subprocess_wrapper.run_as_root(generate_copy_arguments(kdk_path, kdk_dst_path), stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
if result.returncode != 0:
logging.info("Failed to create KDK backup:")
subprocess_wrapper.log(result)

View File

@@ -18,8 +18,7 @@ from .. import constants
from . import (
analytics_handler,
- global_settings,
- subprocess_wrapper
+ global_settings
)

View File

@@ -1,47 +1,29 @@
"""
- macos_installer_handler.py: Handler for macOS installers, both local and remote
+ macos_installer_handler.py: Handler for local macOS installers
"""
- import enum
import logging
import plistlib
import tempfile
import subprocess
- import applescript
+ import re
from pathlib import Path
from ..datasets import os_data
from . import (
- network_handler,
utilities,
subprocess_wrapper
)
+ from ..volume import (
+ can_copy_on_write,
+ generate_copy_arguments
+ )
APPLICATION_SEARCH_PATH: str = "/Applications"
SFR_SOFTWARE_UPDATE_PATH: str = "SFR/com_apple_MobileAsset_SFRSoftwareUpdate/com_apple_MobileAsset_SFRSoftwareUpdate.xml"
CATALOG_URL_BASE: str = "https://swscan.apple.com/content/catalogs/others/index"
CATALOG_URL_EXTENSION: str = ".merged-1.sucatalog"
CATALOG_URL_VARIANTS: list = [
"15",
"14",
"13",
"12",
"10.16",
"10.15",
"10.14",
"10.13",
"10.12",
"10.11",
"10.10",
"10.9",
"mountainlion",
"lion",
"snowleopard",
"leopard",
]
tmp_dir = tempfile.TemporaryDirectory()
@@ -113,13 +95,9 @@ class InstallerCreation():
for file in Path(ia_tmp).glob("*"):
subprocess.run(["/bin/rm", "-rf", str(file)])
- # Copy installer to tmp (use CoW to avoid extra disk writes)
- args = ["/bin/cp", "-cR", installer_path, ia_tmp]
- if utilities.check_filesystem_type() != "apfs":
- # HFS+ disks do not support CoW
- args[1] = "-R"
- # Ensure we have enough space for the duplication
+ # Copy installer to tmp
+ if can_copy_on_write(installer_path, ia_tmp) is False:
+ # Ensure we have enough space for the duplication when CoW is not supported
space_available = utilities.get_free_space()
space_needed = Path(ia_tmp).stat().st_size
if space_available < space_needed:
@@ -127,7 +105,7 @@ class InstallerCreation():
logging.info(f"{utilities.human_fmt(space_available)} available, {utilities.human_fmt(space_needed)} required")
return False
- subprocess.run(args)
+ subprocess.run(generate_copy_arguments(installer_path, ia_tmp))
# Adjust installer_path to point to the copied installer
installer_path = Path(ia_tmp) / Path(Path(installer_path).name)
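The diff replaces the inline `cp -cR` handling with helpers from a new `volume` module. A minimal sketch of how those helpers might behave, with a same-filesystem check standing in for OCLP's actual APFS/clonefile detection (the helper bodies here are assumptions; only the names come from the diff):

```python
import os
import tempfile

def can_copy_on_write(source: str, destination: str) -> bool:
    # Sketch: cp(1)'s clone path only works when source and destination
    # share a filesystem; the real helper also verifies the filesystem type
    return os.stat(source).st_dev == os.stat(destination).st_dev

def generate_copy_arguments(source: str, destination: str) -> list:
    # '-c' requests a copy-on-write clone; fall back to a plain
    # recursive copy when cloning cannot work
    if can_copy_on_write(source, destination):
        return ["/bin/cp", "-cR", str(source), str(destination)]
    return ["/bin/cp", "-R", str(source), str(destination)]

with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "Installer.app")
    os.mkdir(src)
    # Same directory, so the same device: the CoW flag is kept
    args = generate_copy_arguments(src, tmp)
    print(args)
```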
@@ -193,9 +171,15 @@ fi
disks = plistlib.loads(subprocess.run(["/usr/sbin/diskutil", "list", "-plist"], stdout=subprocess.PIPE).stdout.decode().strip().encode())
for disk in disks["AllDisksAndPartitions"]:
- disk_info = plistlib.loads(subprocess.run(["/usr/sbin/diskutil", "info", "-plist", disk["DeviceIdentifier"]], stdout=subprocess.PIPE).stdout.decode().strip().encode())
try:
- all_disks[disk["DeviceIdentifier"]] = {"identifier": disk_info["DeviceNode"], "name": disk_info["MediaName"], "size": disk_info["TotalSize"], "removable": disk_info["Internal"], "partitions": {}}
+ disk_info = plistlib.loads(subprocess.run(["/usr/sbin/diskutil", "info", "-plist", disk["DeviceIdentifier"]], stdout=subprocess.PIPE).stdout.decode().strip().encode())
+ except:
+ # Chinesium USB can have garbage data in MediaName
+ diskutil_output = subprocess.run(["/usr/sbin/diskutil", "info", "-plist", disk["DeviceIdentifier"]], stdout=subprocess.PIPE).stdout.decode().strip()
+ ungarbafied_output = re.sub(r'(<key>MediaName</key>\s*<string>).*?(</string>)', r'\1\2', diskutil_output).encode()
+ disk_info = plistlib.loads(ungarbafied_output)
+ try:
+ all_disks[disk["DeviceIdentifier"]] = {"identifier": disk_info["DeviceNode"], "name": disk_info.get("MediaName", "Disk"), "size": disk_info["TotalSize"], "removable": disk_info["Internal"], "partitions": {}}
except KeyError:
# Avoid crashing with CDs installed
continue
@@ -221,288 +205,6 @@ fi
return list_disks
class SeedType(enum.IntEnum):
"""
Enum for catalog types
Variants:
DeveloperSeed: Developer Beta (Part of the Apple Developer Program)
PublicSeed: Public Beta
CustomerSeed: AppleSeed Program (Generally mirrors DeveloperSeed)
PublicRelease: Public Release
"""
DeveloperSeed: int = 0
PublicSeed: int = 1
CustomerSeed: int = 2
PublicRelease: int = 3
class RemoteInstallerCatalog:
"""
Parses Apple's Software Update catalog and finds all macOS installers.
"""
def __init__(self, seed_override: SeedType = SeedType.PublicRelease, os_override: int = os_data.os_data.sonoma) -> None:
self.catalog_url: str = self._construct_catalog_url(seed_override, os_override)
self.available_apps: dict = self._parse_catalog()
self.available_apps_latest: dict = self._list_newest_installers_only()
def _construct_catalog_url(self, seed_type: SeedType, os_kernel: int) -> str:
"""
Constructs the catalog URL based on the seed type
Parameters:
seed_type (SeedType): The seed type to use
Returns:
str: The catalog URL
"""
url: str = CATALOG_URL_BASE
os_version: str = os_data.os_conversion.kernel_to_os(os_kernel)
os_version = "10.16" if os_version == "11" else os_version
if os_version not in CATALOG_URL_VARIANTS:
logging.error(f"OS version {os_version} is not supported, defaulting to latest")
os_version = CATALOG_URL_VARIANTS[0]
url += f"-{os_version}"
if seed_type == SeedType.DeveloperSeed:
url += f"seed"
elif seed_type == SeedType.PublicSeed:
url += f"beta"
elif seed_type == SeedType.CustomerSeed:
url += f"customerseed"
did_find_variant: bool = False
for variant in CATALOG_URL_VARIANTS:
if variant in url:
did_find_variant = True
if did_find_variant:
url += f"-{variant}"
url += f"{CATALOG_URL_EXTENSION}"
return url
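The removed URL construction above can be condensed into a standalone sketch. This is a simplification: the deleted method additionally re-appends the matched variant, and real sucatalog URLs chain older OS variants as well; only the base, seed suffixes, and the Big Sur "10.16" alias are mirrored here:

```python
CATALOG_URL_BASE = "https://swscan.apple.com/content/catalogs/others/index"
CATALOG_URL_EXTENSION = ".merged-1.sucatalog"

# Suffixes taken from the deleted SeedType branches
SEED_SUFFIXES = {
    "DeveloperSeed": "seed",
    "PublicSeed": "beta",
    "CustomerSeed": "customerseed",
    "PublicRelease": "",
}

def construct_catalog_url(os_version: str, seed: str = "PublicRelease") -> str:
    # Big Sur is published as "10.16" in the software update catalogs
    os_version = "10.16" if os_version == "11" else os_version
    return f"{CATALOG_URL_BASE}-{os_version}{SEED_SUFFIXES[seed]}{CATALOG_URL_EXTENSION}"

print(construct_catalog_url("14", "DeveloperSeed"))
print(construct_catalog_url("11"))
```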
def _fetch_catalog(self) -> dict:
"""
Fetches the catalog from Apple's servers
Returns:
dict: The catalog as a dictionary
"""
catalog: dict = {}
if network_handler.NetworkUtilities(self.catalog_url).verify_network_connection() is False:
return catalog
try:
catalog = plistlib.loads(network_handler.NetworkUtilities().get(self.catalog_url).content)
except plistlib.InvalidFileException:
return {}
return catalog
def _parse_catalog(self) -> dict:
"""
Parses the catalog and returns a dictionary of available installers
Returns:
dict: Dictionary of available installers
"""
available_apps: dict = {}
catalog: dict = self._fetch_catalog()
if not catalog:
return available_apps
if "Products" not in catalog:
return available_apps
for product in catalog["Products"]:
if "ExtendedMetaInfo" not in catalog["Products"][product]:
continue
if "Packages" not in catalog["Products"][product]:
continue
if "InstallAssistantPackageIdentifiers" not in catalog["Products"][product]["ExtendedMetaInfo"]:
continue
if "SharedSupport" not in catalog["Products"][product]["ExtendedMetaInfo"]["InstallAssistantPackageIdentifiers"]:
continue
if "BuildManifest" not in catalog["Products"][product]["ExtendedMetaInfo"]["InstallAssistantPackageIdentifiers"]:
continue
for bm_package in catalog["Products"][product]["Packages"]:
if "Info.plist" not in bm_package["URL"]:
continue
if "InstallInfo.plist" in bm_package["URL"]:
continue
try:
build_plist = plistlib.loads(network_handler.NetworkUtilities().get(bm_package["URL"]).content)
except plistlib.InvalidFileException:
continue
if "MobileAssetProperties" not in build_plist:
continue
if "SupportedDeviceModels" not in build_plist["MobileAssetProperties"]:
continue
if "OSVersion" not in build_plist["MobileAssetProperties"]:
continue
if "Build" not in build_plist["MobileAssetProperties"]:
continue
# Ensure Apple Silicon specific Installers are not listed
if "VMM-x86_64" not in build_plist["MobileAssetProperties"]["SupportedDeviceModels"]:
continue
version = build_plist["MobileAssetProperties"]["OSVersion"]
build = build_plist["MobileAssetProperties"]["Build"]
try:
catalog_url = build_plist["MobileAssetProperties"]["BridgeVersionInfo"]["CatalogURL"]
if "beta" in catalog_url:
catalog_url = "PublicSeed"
elif "customerseed" in catalog_url:
catalog_url = "CustomerSeed"
elif "seed" in catalog_url:
catalog_url = "DeveloperSeed"
else:
catalog_url = "Public"
except KeyError:
# Assume Public if no catalog URL is found
catalog_url = "Public"
download_link = None
integrity = None
size = None
date = catalog["Products"][product]["PostDate"]
for ia_package in catalog["Products"][product]["Packages"]:
if "InstallAssistant.pkg" not in ia_package["URL"]:
continue
if "URL" not in ia_package:
continue
if "IntegrityDataURL" not in ia_package:
continue
download_link = ia_package["URL"]
integrity = ia_package["IntegrityDataURL"]
size = ia_package["Size"] if ia_package["Size"] else 0
if any([version, build, download_link, size, integrity]) is None:
continue
available_apps.update({
product: {
"Version": version,
"Build": build,
"Link": download_link,
"Size": size,
"integrity": integrity,
"Source": "Apple Inc.",
"Variant": catalog_url,
"OS": os_data.os_conversion.os_to_kernel(version),
"Models": build_plist["MobileAssetProperties"]["SupportedDeviceModels"],
"Date": date
}
})
available_apps = {k: v for k, v in sorted(available_apps.items(), key=lambda x: x[1]['Version'])}
return available_apps
def _list_newest_installers_only(self) -> dict:
"""
Returns a dictionary of the newest macOS installers only.
Primarily used to avoid overwhelming the user with a list of
installers that are not the newest version.
Returns:
dict: A dictionary of the newest macOS installers only.
"""
if self.available_apps is None:
return {}
newest_apps: dict = self.available_apps.copy()
supported_versions = ["10.13", "10.14", "10.15", "11", "12", "13", "14"]
for version in supported_versions:
remote_version_minor = 0
remote_version_security = 0
os_builds = []
# First determine the largest version
for ia in newest_apps:
if newest_apps[ia]["Version"].startswith(version):
if newest_apps[ia]["Variant"] not in ["CustomerSeed", "DeveloperSeed", "PublicSeed"]:
remote_version = newest_apps[ia]["Version"].split(".")
if remote_version[0] == "10":
remote_version.pop(0)
remote_version.pop(0)
else:
remote_version.pop(0)
if int(remote_version[0]) > remote_version_minor:
remote_version_minor = int(remote_version[0])
remote_version_security = 0 # Reset as new minor version found
if len(remote_version) > 1:
if int(remote_version[1]) > remote_version_security:
remote_version_security = int(remote_version[1])
# Now remove all versions that are not the largest
for ia in list(newest_apps):
# Don't use Beta builds to determine latest version
if newest_apps[ia]["Variant"] in ["CustomerSeed", "DeveloperSeed", "PublicSeed"]:
continue
if newest_apps[ia]["Version"].startswith(version):
remote_version = newest_apps[ia]["Version"].split(".")
if remote_version[0] == "10":
remote_version.pop(0)
remote_version.pop(0)
else:
remote_version.pop(0)
if int(remote_version[0]) < remote_version_minor:
newest_apps.pop(ia)
continue
if int(remote_version[0]) == remote_version_minor:
if len(remote_version) > 1:
if int(remote_version[1]) < remote_version_security:
newest_apps.pop(ia)
continue
else:
if remote_version_security > 0:
newest_apps.pop(ia)
continue
# Remove duplicate builds
# ex. macOS 12.5.1 has 2 builds in the Software Update Catalog
# ref: https://twitter.com/classicii_mrmac/status/1560357471654379522
if newest_apps[ia]["Build"] in os_builds:
newest_apps.pop(ia)
continue
os_builds.append(newest_apps[ia]["Build"])
# Remove Betas if there's a non-beta version available
for ia in list(newest_apps):
if newest_apps[ia]["Variant"] in ["CustomerSeed", "DeveloperSeed", "PublicSeed"]:
for ia2 in newest_apps:
if newest_apps[ia2]["Version"].split(".")[0] == newest_apps[ia]["Version"].split(".")[0] and newest_apps[ia2]["Variant"] not in ["CustomerSeed", "DeveloperSeed", "PublicSeed"]:
newest_apps.pop(ia)
break
return newest_apps
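The latest-installer filtering deleted above hinges on how a version string is split into (minor, security) components. Extracted as a standalone helper, behavior copied from the deleted loop, with a guard added for short version strings (the original indexes unconditionally):

```python
def split_version(version: str) -> tuple:
    # Strip the major component, then read (minor, security);
    # "10.x" versions drop two components, matching the deleted logic
    parts = version.split(".")
    if parts[0] == "10":
        parts = parts[2:]   # e.g. "10.15.7" -> ["7"]
    else:
        parts = parts[1:]   # e.g. "13.6.1" -> ["6", "1"]
    minor = int(parts[0]) if parts else 0               # guard: "11" -> (0, 0)
    security = int(parts[1]) if len(parts) > 1 else 0
    return (minor, security)

print(split_version("13.6.1"), split_version("10.15.7"))
```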
class LocalInstallerCatalog:
"""
Finds all macOS installers on the local machine.
@@ -641,9 +343,15 @@ class LocalInstallerCatalog:
if output.returncode != 0:
return (detected_build, detected_os)
- ss_info = Path(SFR_SOFTWARE_UPDATE_PATH)
- if Path(tmpdir / ss_info).exists():
+ ss_info_files = [
+ Path("SFR/com_apple_MobileAsset_SFRSoftwareUpdate/com_apple_MobileAsset_SFRSoftwareUpdate.xml"),
+ Path("com_apple_MobileAsset_MacSoftwareUpdate/com_apple_MobileAsset_MacSoftwareUpdate.xml")
+ ]
+ for ss_info in ss_info_files:
+ if not Path(tmpdir / ss_info).exists():
+ continue
plist = plistlib.load((tmpdir / ss_info).open("rb"))
if "Assets" in plist:
if "Build" in plist["Assets"][0]:
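The two-path probe above (the second path covers Sequoia-era installers) can be exercised against a synthetic directory tree; the MobileAsset payload and the "99Z999" build string below are made up for illustration:

```python
import plistlib
import tempfile
from pathlib import Path

# Hypothetical MobileAsset payload with a made-up build string
sample = plistlib.dumps({"Assets": [{"Build": "99Z999", "OSVersion": "15.0"}]})

with tempfile.TemporaryDirectory() as raw_tmp:
    tmpdir = Path(raw_tmp)
    target = tmpdir / "com_apple_MobileAsset_MacSoftwareUpdate" / "com_apple_MobileAsset_MacSoftwareUpdate.xml"
    target.parent.mkdir(parents=True)
    target.write_bytes(sample)

    # Same search loop as the patch: first existing candidate wins
    ss_info_files = [
        Path("SFR/com_apple_MobileAsset_SFRSoftwareUpdate/com_apple_MobileAsset_SFRSoftwareUpdate.xml"),
        Path("com_apple_MobileAsset_MacSoftwareUpdate/com_apple_MobileAsset_MacSoftwareUpdate.xml"),
    ]
    detected_build = None
    for ss_info in ss_info_files:
        if not Path(tmpdir / ss_info).exists():
            continue
        plist = plistlib.load((tmpdir / ss_info).open("rb"))
        if "Assets" in plist and "Build" in plist["Assets"][0]:
            detected_build = plist["Assets"][0]["Build"]

print(detected_build)
```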

View File

@@ -3,7 +3,6 @@ subprocess_wrapper.py: Wrapper for subprocess module to better handle errors and
Additionally handles our Privileged Helper Tool
"""
- import os
import enum
import logging
import subprocess

View File

@@ -0,0 +1,17 @@
"""
auto_patcher: Automatic system volume patching after updates, etc.
Usage:
>>> # Installing launch services
>>> from auto_patcher import InstallAutomaticPatchingServices
>>> InstallAutomaticPatchingServices(self.constants).install_auto_patcher_launch_agent()
>>> # When patching the system volume (ex. launch service)
>>> from auto_patcher import StartAutomaticPatching
>>> StartAutomaticPatching(self.constants).start_auto_patch()
"""
from .install import InstallAutomaticPatchingServices
from .start import StartAutomaticPatching

View File

@@ -0,0 +1,116 @@
"""
install.py: Install the auto patcher launch services
"""
import hashlib
import logging
import plistlib
import subprocess
from pathlib import Path
from ... import constants
from ...volume import generate_copy_arguments
from ...support import (
utilities,
subprocess_wrapper
)
class InstallAutomaticPatchingServices:
"""
Install the auto patcher launch services
"""
def __init__(self, global_constants: constants.Constants):
self.constants: constants.Constants = global_constants
def install_auto_patcher_launch_agent(self, kdk_caching_needed: bool = False):
"""
Install patcher launch services
See start_auto_patch() comments for more info
"""
if self.constants.launcher_script is not None:
logging.info("- Skipping Auto Patcher Launch Agent, not supported when running from source")
return
services = {
self.constants.auto_patch_launch_agent_path: "/Library/LaunchAgents/com.dortania.opencore-legacy-patcher.auto-patch.plist",
self.constants.update_launch_daemon_path: "/Library/LaunchDaemons/com.dortania.opencore-legacy-patcher.macos-update.plist",
**({ self.constants.rsr_monitor_launch_daemon_path: "/Library/LaunchDaemons/com.dortania.opencore-legacy-patcher.rsr-monitor.plist" } if self._create_rsr_monitor_daemon() else {}),
**({ self.constants.kdk_launch_daemon_path: "/Library/LaunchDaemons/com.dortania.opencore-legacy-patcher.os-caching.plist" } if kdk_caching_needed is True else {} ),
}
for service in services:
name = Path(service).name
logging.info(f"- Installing {name}")
if Path(services[service]).exists():
if hashlib.sha256(open(service, "rb").read()).hexdigest() == hashlib.sha256(open(services[service], "rb").read()).hexdigest():
logging.info(f" - {name} checksums match, skipping")
continue
logging.info(f" - Existing service found, removing")
subprocess_wrapper.run_as_root_and_verify(["/bin/rm", services[service]], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
# Create parent directories
if not Path(services[service]).parent.exists():
logging.info(f" - Creating {Path(services[service]).parent} directory")
subprocess_wrapper.run_as_root_and_verify(["/bin/mkdir", "-p", Path(services[service]).parent], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
subprocess_wrapper.run_as_root_and_verify(generate_copy_arguments(service, services[service]), stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
# Set the permissions on the service
subprocess_wrapper.run_as_root_and_verify(["/bin/chmod", "644", services[service]], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
subprocess_wrapper.run_as_root_and_verify(["/usr/sbin/chown", "root:wheel", services[service]], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
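The install loop above only replaces a launch service when the bundled and installed copies hash differently. That comparison in isolation, with throwaway files standing in for the plists:

```python
import hashlib
import tempfile
from pathlib import Path

def checksums_match(source: Path, installed: Path) -> bool:
    # Same comparison the loop performs before touching an installed service
    return (hashlib.sha256(source.read_bytes()).hexdigest()
            == hashlib.sha256(installed.read_bytes()).hexdigest())

with tempfile.TemporaryDirectory() as tmp:
    bundled = Path(tmp) / "bundled.plist"
    installed = Path(tmp) / "installed.plist"
    bundled.write_text("<plist/>")
    installed.write_text("<plist/>")
    same = checksums_match(bundled, installed)      # identical -> skip install
    installed.write_text("<plist><dict/></plist>")
    changed = checksums_match(bundled, installed)   # differs -> replace service

print(same, changed)
```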
def _create_rsr_monitor_daemon(self) -> bool:
# Get kext list in /Library/Extensions that have the 'GPUCompanionBundles' property
# This is used to determine if we need to run the RSRMonitor
logging.info("- Checking if RSRMonitor is needed")
cryptex_path = f"/System/Volumes/Preboot/{utilities.get_preboot_uuid()}/cryptex1/current/OS.dmg"
if not Path(cryptex_path).exists():
logging.info("- No OS.dmg, skipping RSRMonitor")
return False
kexts = []
for kext in Path("/Library/Extensions").glob("*.kext"):
if not Path(f"{kext}/Contents/Info.plist").exists():
continue
try:
kext_plist = plistlib.load(open(f"{kext}/Contents/Info.plist", "rb"))
except Exception as e:
logging.info(f" - Failed to load plist for {kext.name}: {e}")
continue
if "GPUCompanionBundles" not in kext_plist:
continue
logging.info(f" - Found kext with GPUCompanionBundles: {kext.name}")
kexts.append(kext.name)
# If we have no kexts, we don't need to run the RSRMonitor
if not kexts:
logging.info("- No kexts found with GPUCompanionBundles, skipping RSRMonitor")
return False
# Load the RSRMonitor plist
rsr_monitor_plist = plistlib.load(open(self.constants.rsr_monitor_launch_daemon_path, "rb"))
arguments = ["/bin/rm", "-Rfv"]
arguments += [f"/Library/Extensions/{kext}" for kext in kexts]
# Add the arguments to the RSRMonitor plist
rsr_monitor_plist["ProgramArguments"] = arguments
# Next add monitoring for '/System/Volumes/Preboot/{UUID}/cryptex1/OS.dmg'
logging.info(f" - Adding monitor: {cryptex_path}")
rsr_monitor_plist["WatchPaths"] = [
cryptex_path,
]
# Write the RSRMonitor plist
plistlib.dump(rsr_monitor_plist, Path(self.constants.rsr_monitor_launch_daemon_path).open("wb"))
return True
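The RSRMonitor daemon above amounts to a LaunchDaemon plist whose `ProgramArguments` delete the patched kexts and whose `WatchPaths` fire on cryptex replacement. A standalone sketch of the plist it produces (the kext name, label, and UUID placeholder are hypothetical; the real values come from `/Library/Extensions` and `utilities.get_preboot_uuid()`):

```python
import plistlib

# Hypothetical inputs
kexts = ["AMDRadeonX6000.kext"]
cryptex_path = "/System/Volumes/Preboot/UUID-PLACEHOLDER/cryptex1/current/OS.dmg"

rsr_monitor_plist = {"Label": "com.dortania.opencore-legacy-patcher.rsr-monitor"}
rsr_monitor_plist["ProgramArguments"] = ["/bin/rm", "-Rfv"] + [f"/Library/Extensions/{kext}" for kext in kexts]
rsr_monitor_plist["WatchPaths"] = [cryptex_path]

# Round-trip through plistlib, as the daemon file would be written and read
loaded = plistlib.loads(plistlib.dumps(rsr_monitor_plist))
print(loaded["ProgramArguments"])
```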

View File

@@ -1,11 +1,10 @@
"""
- sys_patch_auto.py: Library of functions for launch services, including automatic patching
+ start.py: Start automatic patching of host
"""
import wx
import wx.html2
- import hashlib
import logging
import plistlib
import requests
@@ -13,30 +12,27 @@ import markdown2
import subprocess
import webbrowser
- from pathlib import Path
- from . import sys_patch_detect
- from .. import constants
- from ..datasets import css_data
- from ..wx_gui import (
+ from ..detections import DetectRootPatch
+ from ... import constants
+ from ...datasets import css_data
+ from ...wx_gui import (
gui_entry,
gui_support
)
- from ..support import (
+ from ...support import (
utilities,
updates,
global_settings,
network_handler,
- subprocess_wrapper
)
- class AutomaticSysPatch:
+ class StartAutomaticPatching:
"""
- Library of functions for launch agent, including automatic patching
+ Start automatic patching of host
"""
def __init__(self, global_constants: constants.Constants):
@@ -146,9 +142,9 @@ Please check the Github page for more information about this release."""
if utilities.check_seal() is True:
logging.info("- Detected Snapshot seal intact, detecting patches")
- patches = sys_patch_detect.DetectRootPatch(self.constants.computer.real_model, self.constants).detect_patch_set()
+ patches = DetectRootPatch(self.constants.computer.real_model, self.constants).detect_patch_set()
if not any(not patch.startswith("Settings") and not patch.startswith("Validation") and patches[patch] is True for patch in patches):
- patches = []
+ patches = {}
if patches:
logging.info("- Detected applicable patches, determining whether possible to patch")
if patches["Validation: Patching Possible"] is False:
@@ -316,92 +312,4 @@ Please check the Github page for more information about this release."""
gui_entry.EntryPoint(self.constants).start(entry=gui_entry.SupportedEntryPoints.BUILD_OC)
except KeyError:
logging.info("- Unable to determine if boot disk is removable, skipping prompt")
def install_auto_patcher_launch_agent(self, kdk_caching_needed: bool = False):
"""
Install patcher launch services
See start_auto_patch() comments for more info
"""
if self.constants.launcher_script is not None:
logging.info("- Skipping Auto Patcher Launch Agent, not supported when running from source")
return
services = {
self.constants.auto_patch_launch_agent_path: "/Library/LaunchAgents/com.dortania.opencore-legacy-patcher.auto-patch.plist",
self.constants.update_launch_daemon_path: "/Library/LaunchDaemons/com.dortania.opencore-legacy-patcher.macos-update.plist",
**({ self.constants.rsr_monitor_launch_daemon_path: "/Library/LaunchDaemons/com.dortania.opencore-legacy-patcher.rsr-monitor.plist" } if self._create_rsr_monitor_daemon() else {}),
**({ self.constants.kdk_launch_daemon_path: "/Library/LaunchDaemons/com.dortania.opencore-legacy-patcher.os-caching.plist" } if kdk_caching_needed is True else {} ),
}
for service in services:
name = Path(service).name
logging.info(f"- Installing {name}")
if Path(services[service]).exists():
if hashlib.sha256(open(service, "rb").read()).hexdigest() == hashlib.sha256(open(services[service], "rb").read()).hexdigest():
logging.info(f" - {name} checksums match, skipping")
continue
logging.info(f" - Existing service found, removing")
subprocess_wrapper.run_as_root_and_verify(["/bin/rm", services[service]], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
# Create parent directories
if not Path(services[service]).parent.exists():
logging.info(f" - Creating {Path(services[service]).parent} directory")
subprocess_wrapper.run_as_root_and_verify(["/bin/mkdir", "-p", Path(services[service]).parent], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
subprocess_wrapper.run_as_root_and_verify(["/bin/cp", service, services[service]], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
# Set the permissions on the service
subprocess_wrapper.run_as_root_and_verify(["/bin/chmod", "644", services[service]], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
subprocess_wrapper.run_as_root_and_verify(["/usr/sbin/chown", "root:wheel", services[service]], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
def _create_rsr_monitor_daemon(self) -> bool:
# Get kext list in /Library/Extensions that have the 'GPUCompanionBundles' property
# This is used to determine if we need to run the RSRMonitor
logging.info("- Checking if RSRMonitor is needed")
cryptex_path = f"/System/Volumes/Preboot/{utilities.get_preboot_uuid()}/cryptex1/current/OS.dmg"
if not Path(cryptex_path).exists():
logging.info("- No OS.dmg, skipping RSRMonitor")
return False
kexts = []
for kext in Path("/Library/Extensions").glob("*.kext"):
if not Path(f"{kext}/Contents/Info.plist").exists():
continue
try:
kext_plist = plistlib.load(open(f"{kext}/Contents/Info.plist", "rb"))
except Exception as e:
logging.info(f" - Failed to load plist for {kext.name}: {e}")
continue
if "GPUCompanionBundles" not in kext_plist:
continue
logging.info(f" - Found kext with GPUCompanionBundles: {kext.name}")
kexts.append(kext.name)
# If we have no kexts, we don't need to run the RSRMonitor
if not kexts:
logging.info("- No kexts found with GPUCompanionBundles, skipping RSRMonitor")
return False
# Load the RSRMonitor plist
rsr_monitor_plist = plistlib.load(open(self.constants.rsr_monitor_launch_daemon_path, "rb"))
arguments = ["/bin/rm", "-Rfv"]
arguments += [f"/Library/Extensions/{kext}" for kext in kexts]
# Add the arguments to the RSRMonitor plist
rsr_monitor_plist["ProgramArguments"] = arguments
# Next add monitoring for '/System/Volumes/Preboot/{UUID}/cryptex1/OS.dmg'
logging.info(f" - Adding monitor: {cryptex_path}")
rsr_monitor_plist["WatchPaths"] = [
cryptex_path,
]
# Write the RSRMonitor plist
plistlib.dump(rsr_monitor_plist, Path(self.constants.rsr_monitor_launch_daemon_path).open("wb"))
return True

View File

@@ -0,0 +1,5 @@
"""
detections: Detect and generate patch sets for the host
"""
from .detect import DetectRootPatch
from .generate import GenerateRootPatchSets

View File

@@ -1,5 +1,5 @@
"""
- sys_patch_detect.py: Hardware Detection Logic for Root Patching
+ detect.py: Hardware Detection Logic for Root Patching
"""
import logging
@@ -9,18 +9,18 @@ import packaging.version
from pathlib import Path
- from .. import constants
- from ..detections import (
+ from ... import constants
+ from ...detections import (
amfi_detect,
device_probe
)
- from ..support import (
+ from ...support import (
kdk_handler,
network_handler,
utilities
)
- from ..datasets import (
+ from ...datasets import (
cpu_data,
model_array,
os_data,

View File

@@ -1,14 +1,14 @@
"""
- sys_patch_generate.py: Class for generating patch sets for the current host
+ generate.py: Class for generating patch sets for the current host
"""
import logging
- from .. import constants
- from ..datasets import sys_patch_dict
- from ..support import utilities
- from ..detections import device_probe
+ from ... import constants
+ from ...datasets import sys_patch_dict
+ from ...support import utilities
+ from ...detections import device_probe
class GenerateRootPatchSets:

View File

@@ -0,0 +1,11 @@
"""
kernelcache: Library for rebuilding macOS kernelcache files.
Usage:
>>> from kernelcache import RebuildKernelCache
>>> RebuildKernelCache(os_version, mount_location, auxiliary_cache, auxiliary_cache_only).rebuild()
"""
from .rebuild import RebuildKernelCache
from .kernel_collection.support import KernelCacheSupport

View File

@@ -0,0 +1,8 @@
"""
cache.py: Base class for kernel cache management
"""
class BaseKernelCache:
def rebuild(self) -> None:
raise NotImplementedError("To be implemented in subclass")

View File

@@ -0,0 +1,72 @@
"""
auxiliary.py: Auxiliary Kernel Collection management
"""
import logging
import subprocess
from ..base.cache import BaseKernelCache
from ....support import subprocess_wrapper
class AuxiliaryKernelCollection(BaseKernelCache):
def __init__(self, mount_location: str) -> None:
self.mount_location = mount_location
def _kmutil_arguments(self) -> list[str]:
args = ["/usr/bin/kmutil", "create", "--allow-missing-kdk"]
args.append("--new")
args.append("aux")
args.append("--boot-path")
args.append(f"{self.mount_location}/System/Library/KernelCollections/BootKernelExtensions.kc")
args.append("--system-path")
args.append(f"{self.mount_location}/System/Library/KernelCollections/SystemKernelExtensions.kc")
return args
def _force_auxiliary_usage(self) -> bool:
"""
Force the auxiliary kernel collection to be used.
This is required as Apple doesn't offer a public way
to rebuild the auxiliary kernel collection. Instead deleting
necessary files and directories will force the newly built
collection to be used.
"""
print("- Forcing Auxiliary Kernel Collection usage")
result = subprocess_wrapper.run_as_root(["/usr/bin/killall", "syspolicyd", "kernelmanagerd"], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
if result.returncode != 0:
logging.info("- Unable to kill syspolicyd and kernelmanagerd")
subprocess_wrapper.log(result)
return False
for file in ["KextPolicy", "KextPolicy-shm", "KextPolicy-wal"]:
result = subprocess_wrapper.run_as_root(["/bin/rm", f"/private/var/db/SystemPolicyConfiguration/{file}"], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
if result.returncode != 0:
logging.info(f"- Unable to remove {file}")
subprocess_wrapper.log(result)
return False
return True
def rebuild(self) -> None:
logging.info("- Building new Auxiliary Kernel Collection")
result = subprocess_wrapper.run_as_root(self._kmutil_arguments(), stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
if result.returncode != 0:
logging.info("- Unable to build Auxiliary Kernel Collection")
subprocess_wrapper.log(result)
return False
if self._force_auxiliary_usage() is False:
return False
return True
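The `_kmutil_arguments()` builder above is pure argument construction, so it can be previewed without root or a mounted volume. A sketch mirroring it as a plain function (the mount path below is a hypothetical example):

```python
def kmutil_aux_arguments(mount_location: str) -> list:
    # Mirrors AuxiliaryKernelCollection._kmutil_arguments() above
    return [
        "/usr/bin/kmutil", "create", "--allow-missing-kdk",
        "--new", "aux",
        "--boot-path", f"{mount_location}/System/Library/KernelCollections/BootKernelExtensions.kc",
        "--system-path", f"{mount_location}/System/Library/KernelCollections/SystemKernelExtensions.kc",
    ]

# Hypothetical mount point for a patched system volume
args = kmutil_aux_arguments("/System/Volumes/Update/mnt1")
print(" ".join(args))
```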

View File

@@ -0,0 +1,62 @@
"""
boot_system.py: Boot and System Kernel Collection management
"""

import logging
import subprocess

from ..base.cache import BaseKernelCache

from ....support import subprocess_wrapper
from ....datasets import os_data


class BootSystemKernelCollections(BaseKernelCache):

    def __init__(self, mount_location: str, detected_os: int, auxiliary_kc: bool) -> None:
        self.mount_location = mount_location
        self.detected_os = detected_os
        self.auxiliary_kc = auxiliary_kc

    def _kmutil_arguments(self) -> list[str]:
        """
        Generate kmutil arguments for creating or updating
        the boot, system and auxiliary kernel collections
        """
        args = ["/usr/bin/kmutil"]

        if self.detected_os >= os_data.os_data.ventura:
            args.append("create")
            args.append("--allow-missing-kdk")
        else:
            args.append("install")

        args.append("--volume-root")
        args.append(self.mount_location)

        args.append("--update-all")

        args.append("--variant-suffix")
        args.append("release")

        if self.auxiliary_kc is True:
            # Following arguments are supposed to skip kext consent
            # prompts when creating auxiliary KCs with SIP disabled
            args.append("--no-authentication")
            args.append("--no-authorization")

        return args

    def rebuild(self) -> bool:
        logging.info(f"- Rebuilding {'Boot and System' if self.auxiliary_kc is False else 'Boot, System and Auxiliary'} Kernel Collections")
        if self.auxiliary_kc is True:
            logging.info(" (You will get a prompt by System Preferences, ignore for now)")

        result = subprocess_wrapper.run_as_root(self._kmutil_arguments(), stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
        if result.returncode != 0:
            subprocess_wrapper.log(result)
            return False

        return True
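The version gate above hinges on a kmutil behavior change: Ventura deprecated `kmutil install --update-all` in favor of `kmutil create`, and requires `--allow-missing-kdk` when no Kernel Debug Kit is parsed. A standalone sketch of that argument selection (the `VENTURA` constant is a stand-in for `os_data.os_data.ventura`, i.e. Darwin 22, not the project's own enum):

```python
# Sketch of the kmutil argument builder above; VENTURA is an assumed
# stand-in for os_data.os_data.ventura (Darwin 22).
VENTURA = 22

def kmutil_args(mount_location: str, detected_os: int, auxiliary_kc: bool) -> list[str]:
    args = ["/usr/bin/kmutil"]
    if detected_os >= VENTURA:
        # Ventura: 'install --update-all' is deprecated, and a KDK is
        # expected unless explicitly waived
        args += ["create", "--allow-missing-kdk"]
    else:
        args += ["install"]
    args += ["--volume-root", mount_location,
             "--update-all",
             "--variant-suffix", "release"]
    if auxiliary_kc:
        # Skip kext consent prompts when building the AuxKC with SIP disabled
        args += ["--no-authentication", "--no-authorization"]
    return args
```

This mirrors the class's logic without its imports, so it can be reasoned about (or unit-tested) in isolation.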


@@ -0,0 +1,163 @@
"""
support.py: Kernel Cache support functions
"""

import logging
import plistlib

from pathlib import Path
from datetime import datetime

from ....datasets import os_data
from ....support import subprocess_wrapper


class KernelCacheSupport:

    def __init__(self, mount_location_data: str, detected_os: int, skip_root_kmutil_requirement: bool) -> None:
        self.mount_location_data = mount_location_data
        self.detected_os = detected_os
        self.skip_root_kmutil_requirement = skip_root_kmutil_requirement

    def check_kexts_needs_authentication(self, kext_name: str) -> bool:
        """
        Verify whether the user needs to authenticate in System Preferences
        Sets 'needs_to_open_preferences' to True if the kext is not in the AuxKC

        Logic:
            Under 'private/var/db/KernelExtensionManagement/AuxKC/CurrentAuxKC/com.apple.kcgen.instructions.plist'
            ["kextsToBuild"][i]:
                ["bundlePathMainOS"] = /Library/Extensions/Test.kext
                ["cdHash"] = Bundle's CDHash (random on ad-hoc signed, static on dev signed)
                ["teamID"] = Team ID (blank on ad-hoc signed)

            To grab the CDHash of a kext, run 'codesign -dvvv <kext_path>'
        """
        if not kext_name.endswith(".kext"):
            return False

        try:
            aux_cache_path = Path(self.mount_location_data) / Path("/private/var/db/KernelExtensionManagement/AuxKC/CurrentAuxKC/com.apple.kcgen.instructions.plist")
            if aux_cache_path.exists():
                aux_cache_data = plistlib.load(aux_cache_path.open("rb"))
                for kext in aux_cache_data["kextsToBuild"]:
                    if "bundlePathMainOS" in aux_cache_data["kextsToBuild"][kext]:
                        if aux_cache_data["kextsToBuild"][kext]["bundlePathMainOS"] == f"/Library/Extensions/{kext_name}":
                            return False
        except PermissionError:
            pass

        logging.info(f" - {kext_name} requires authentication in System Preferences")
        return True
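The membership lookup above can be exercised against a synthetic kcgen instructions plist. This sketch inverts the return convention for readability (True means "the kext is already in the AuxKC"); the plist layout follows the docstring, and the sample data in the usage below is made up:

```python
import plistlib
from pathlib import Path

# Standalone sketch of the AuxKC membership lookup performed by
# check_kexts_needs_authentication(); layout per the docstring above.
def kext_in_auxkc(instructions_plist: Path, kext_name: str) -> bool:
    data = plistlib.load(instructions_plist.open("rb"))
    # 'kextsToBuild' maps arbitrary keys to per-kext dictionaries
    for entry in data["kextsToBuild"].values():
        if entry.get("bundlePathMainOS") == f"/Library/Extensions/{kext_name}":
            return True
    return False
```

A kext found here loads without a trip to System Preferences; anything absent still needs user approval.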
    def add_auxkc_support(self, install_file: str, source_folder_path: str, install_patch_directory: str, destination_folder_path: str) -> str:
        """
        Patch provided Kext to support Auxiliary Kernel Collection

        Logic:
            In macOS Ventura, KDKs are required to build new Boot and System KCs
            However for some patch sets, we're able to use the Auxiliary KCs with '/Library/Extensions'

            kernelmanagerd determines which kext is installed by their 'OSBundleRequired' entry
            If a kext is labeled as 'OSBundleRequired: Root' or 'OSBundleRequired: Safe Boot',
            kernelmanagerd will require the kext to be installed in the Boot/SysKC

            Additionally, kexts starting with 'com.apple.' are not natively allowed to be installed
            in the AuxKC. So we need to explicitly set our 'OSBundleRequired' to 'Auxiliary'

        Parameters:
            install_file            (str): Kext file name
            source_folder_path      (str): Source folder path
            install_patch_directory (str): Patch directory
            destination_folder_path (str): Destination folder path

        Returns:
            str: Updated destination folder path
        """
        if self.skip_root_kmutil_requirement is False:
            return destination_folder_path
        if not install_file.endswith(".kext"):
            return destination_folder_path
        if install_patch_directory != "/System/Library/Extensions":
            return destination_folder_path
        if self.detected_os < os_data.os_data.ventura:
            return destination_folder_path

        updated_install_location = str(self.mount_location_data) + "/Library/Extensions"

        logging.info(f" - Adding AuxKC support to {install_file}")
        plist_path = Path(source_folder_path) / install_file / "Contents/Info.plist"
        plist_data = plistlib.load(plist_path.open("rb"))

        # Check if we need to update the 'OSBundleRequired' entry
        if not plist_data["CFBundleIdentifier"].startswith("com.apple."):
            return updated_install_location
        if "OSBundleRequired" in plist_data:
            if plist_data["OSBundleRequired"] == "Auxiliary":
                return updated_install_location

        plist_data["OSBundleRequired"] = "Auxiliary"
        plistlib.dump(plist_data, plist_path.open("wb"))

        return updated_install_location
    def clean_auxiliary_kc(self) -> None:
        """
        Clean the Auxiliary Kernel Collection

        Logic:
            When reverting root volume patches, the AuxKC will still retain the UUID
            it was built against. Thus when Boot/SysKC are reverted, Aux will break

            To resolve this, delete all installed kexts in /L*/E* and rebuild the AuxKC
            We can verify our binaries based off the OpenCore-Legacy-Patcher.plist file
        """
        if self.detected_os < os_data.os_data.big_sur:
            return

        logging.info("- Cleaning Auxiliary Kernel Collection")
        oclp_path = "/System/Library/CoreServices/OpenCore-Legacy-Patcher.plist"
        if Path(oclp_path).exists():
            oclp_plist_data = plistlib.load(Path(oclp_path).open("rb"))
            for key in oclp_plist_data:
                if isinstance(oclp_plist_data[key], (bool, int)):
                    continue
                for install_type in ["Install", "Install Non-Root"]:
                    if install_type not in oclp_plist_data[key]:
                        continue
                    for location in oclp_plist_data[key][install_type]:
                        if not location.endswith("Extensions"):
                            continue
                        for file in oclp_plist_data[key][install_type][location]:
                            if not file.endswith(".kext"):
                                continue
                            if not Path(f"/Library/Extensions/{file}").exists():
                                continue
                            logging.info(f" - Removing {file}")
                            subprocess_wrapper.run_as_root(["/bin/rm", "-Rf", f"/Library/Extensions/{file}"])

        # Handle situations where users migrated from older OSes with a lot of garbage in /L*/E*
        # ex. Nvidia Web Drivers, NetUSB, dosdude1's patches, etc.
        # Move if file's age is older than October 2021 (year before Ventura)
        if self.detected_os < os_data.os_data.ventura:
            return

        relocation_path = "/Library/Relocated Extensions"
        if not Path(relocation_path).exists():
            subprocess_wrapper.run_as_root(["/bin/mkdir", relocation_path])

        for file in Path("/Library/Extensions").glob("*.kext"):
            try:
                if datetime.fromtimestamp(file.stat().st_mtime) < datetime(2021, 10, 1):
                    logging.info(f" - Relocating {file.name} kext to {relocation_path}")
                    # Note the parentheses: '(base / name).exists()', not 'base / name.exists()'
                    if (Path(relocation_path) / file.name).exists():
                        subprocess_wrapper.run_as_root(["/bin/rm", "-Rf", str(Path(relocation_path) / file.name)])
                    subprocess_wrapper.run_as_root(["/bin/mv", str(file), relocation_path])
            except Exception:
                # Some users have the most cursed /L*/E* folders
                # ex. Symlinks pointing to symlinks pointing to dead files
                pass
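The `OSBundleRequired` rewrite that `add_auxkc_support()` performs is a small plistlib round-trip and is easy to sketch on its own. Paths and the sample identifier in the usage below are hypothetical; the function returns True only when it actually modified the plist:

```python
import plistlib
from pathlib import Path

# Minimal sketch of the Info.plist edit performed by add_auxkc_support():
# Apple-prefixed kexts must carry 'OSBundleRequired: Auxiliary' before
# kernelmanagerd will accept them into the AuxKC.
def mark_auxiliary(plist_path: Path) -> bool:
    data = plistlib.load(plist_path.open("rb"))
    if not data["CFBundleIdentifier"].startswith("com.apple."):
        return False  # third-party kexts need no patching
    if data.get("OSBundleRequired") == "Auxiliary":
        return False  # already patched
    data["OSBundleRequired"] = "Auxiliary"
    plistlib.dump(data, plist_path.open("wb"))
    return True
```

The second call on the same file is a no-op, matching the early-return in the original method.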


@@ -0,0 +1,32 @@
"""
mkext.py: MKext cache management
"""

import logging
import subprocess

from ..base.cache import BaseKernelCache

from ....support import subprocess_wrapper


class MKext(BaseKernelCache):

    def __init__(self, mount_location: str) -> None:
        self.mount_location = mount_location

    def _mkext_arguments(self) -> list[str]:
        # Touching /System/Library/Extensions updates its mtime,
        # prompting the OS to regenerate the mkext cache on next boot
        return ["/usr/bin/touch", f"{self.mount_location}/System/Library/Extensions"]

    def rebuild(self) -> bool:
        logging.info("- Rebuilding MKext cache")
        result = subprocess_wrapper.run_as_root(self._mkext_arguments(), stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
        if result.returncode != 0:
            subprocess_wrapper.log(result)
            return False

        return True


@@ -0,0 +1,48 @@
"""
prelinked.py: Prelinked Kernel cache management
"""

import logging
import subprocess

from pathlib import Path

from ..base.cache import BaseKernelCache

from ....support import subprocess_wrapper


class PrelinkedKernel(BaseKernelCache):

    def __init__(self, mount_location: str) -> None:
        self.mount_location = mount_location

    def _kextcache_arguments(self) -> list[str]:
        return ["/usr/sbin/kextcache", "-invalidate", f"{self.mount_location}/"]

    def _update_preboot_kernel_cache(self) -> None:
        """
        Ensure the Preboot volume's kernel cache is updated
        """
        if not Path("/usr/sbin/kcditto").exists():
            return

        logging.info("- Syncing Kernel Cache to Preboot")
        subprocess_wrapper.run_as_root_and_verify(["/usr/sbin/kcditto"], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)

    def rebuild(self) -> bool:
        logging.info("- Rebuilding Prelinked Kernel")
        result = subprocess_wrapper.run_as_root(self._kextcache_arguments(), stdout=subprocess.PIPE, stderr=subprocess.STDOUT)

        # kextcache notes:
        # - kextcache always returns 0, even if it fails
        # - Check the output for 'KernelCache ID' to see if the cache was successfully rebuilt
        if "KernelCache ID" not in result.stdout.decode():
            subprocess_wrapper.log(result)
            return False

        self._update_preboot_kernel_cache()
        return True
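As the comment in `rebuild()` notes, kextcache exits 0 even on failure, so success has to be detected by scanning its output rather than trusting the exit code. That check is worth isolating:

```python
# kextcache always exits 0, so rebuild() above scans its combined
# stdout/stderr for 'KernelCache ID' to confirm the cache was written.
def kextcache_succeeded(stdout: bytes) -> bool:
    return "KernelCache ID" in stdout.decode()
```

Keeping the check as a pure function over captured output makes the success criterion testable without invoking kextcache itself.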


@@ -0,0 +1,51 @@
"""
rebuild.py: Manage kernel cache rebuilding regardless of macOS version
"""

from .base.cache import BaseKernelCache

from ...datasets import os_data


class RebuildKernelCache:
    """
    RebuildKernelCache: Rebuild the kernel cache

    Parameters:
        - os_version:           macOS version
        - mount_location:       Path to the mounted volume
        - auxiliary_cache:      Whether to create auxiliary kernel cache (Big Sur and later)
        - auxiliary_cache_only: Whether to only create auxiliary kernel cache (Ventura and later)
    """

    def __init__(self, os_version: os_data.os_data, mount_location: str, auxiliary_cache: bool, auxiliary_cache_only: bool) -> None:
        self.os_version = os_version
        self.mount_location = mount_location
        self.auxiliary_cache = auxiliary_cache
        self.auxiliary_cache_only = auxiliary_cache_only

    def _rebuild_method(self) -> BaseKernelCache:
        """
        Determine the correct method to rebuild the kernel cache
        """
        if self.os_version >= os_data.os_data.big_sur:
            if self.os_version >= os_data.os_data.ventura:
                if self.auxiliary_cache_only:
                    from .kernel_collection.auxiliary import AuxiliaryKernelCollection
                    return AuxiliaryKernelCollection(self.mount_location)

            from .kernel_collection.boot_system import BootSystemKernelCollections
            return BootSystemKernelCollections(self.mount_location, self.os_version, self.auxiliary_cache)

        if os_data.os_data.catalina >= self.os_version >= os_data.os_data.lion:
            from .prelinked.prelinked import PrelinkedKernel
            return PrelinkedKernel(self.mount_location)

        from .mkext.mkext import MKext
        return MKext(self.mount_location)

    def rebuild(self) -> bool:
        """
        Rebuild the kernel cache
        """
        return self._rebuild_method().rebuild()
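The dispatch in `_rebuild_method()` reduces to a small decision table over the Darwin major version. A sketch of that table, using assumed stand-ins for the project's `os_data` enum (Lion = Darwin 11, Catalina = 19, Big Sur = 20, Ventura = 22):

```python
# Stand-ins for os_data.os_data values (Darwin major versions); these
# constants are assumptions for illustration, not the project's enum.
LION, CATALINA, BIG_SUR, VENTURA = 11, 19, 20, 22

def cache_backend(os_version: int, aux_only: bool = False) -> str:
    """Mirror of RebuildKernelCache._rebuild_method()'s selection logic."""
    if os_version >= BIG_SUR:
        # aux-only rebuilds are only a thing once Ventura requires a KDK
        if os_version >= VENTURA and aux_only:
            return "AuxiliaryKernelCollection"
        return "BootSystemKernelCollections"
    if CATALINA >= os_version >= LION:
        return "PrelinkedKernel"
    return "MKext"
```

Note that `aux_only` is deliberately ignored below Ventura, matching the nesting of the original conditionals.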


@@ -0,0 +1,16 @@
"""
mount: Library for mounting and unmounting the root volume and interacting with APFS snapshots.
Usage:
>>> from mount import RootVolumeMount
>>> RootVolumeMount(xnu_major).mount()
'/System/Volumes/Update/mnt1'
>>> RootVolumeMount(xnu_major).unmount()
>>> RootVolumeMount(xnu_major).create_snapshot()
>>> RootVolumeMount(xnu_major).revert_snapshot()
"""
from .mount import RootVolumeMount
from .snapshot import APFSSnapshot


@@ -1,62 +1,28 @@
 """
-sys_patch_mount.py: Handling macOS root volume mounting and unmounting,
-as well as APFS snapshots for Big Sur and newer
+mount.py: Handling macOS root volume mounting and unmounting
 """

 import logging
 import plistlib
-import platform
 import subprocess

 from pathlib import Path

-from ..datasets import os_data
-from ..support import subprocess_wrapper
+from .snapshot import APFSSnapshot
+
+from ...datasets import os_data
+from ...support import subprocess_wrapper


-class SysPatchMount:
+class RootVolumeMount:

-    def __init__(self, xnu_major: int, rosetta_status: bool) -> None:
+    def __init__(self, xnu_major: int) -> None:
         self.xnu_major = xnu_major
-        self.rosetta_status = rosetta_status
         self.root_volume_identifier = self._fetch_root_volume_identifier()
         self.mount_path = None

-    def mount(self) -> str:
-        """
-        Mount the root volume.
-        Returns the path to the root volume.
-        If none, failed to mount.
-        """
-        result = self._mount_root_volume()
-        if result is None:
-            logging.error("Failed to mount root volume")
-            return None
-        if not Path(result).exists():
-            logging.error(f"Attempted to mount root volume, but failed: {result}")
-            return None
-        self.mount_path = result
-        return result
-
-    def unmount(self, ignore_errors: bool = True) -> bool:
-        """
-        Unmount the root volume.
-        Returns True if successful, False otherwise.
-        Note for Big Sur and newer, a snapshot is created before unmounting.
-        And that unmounting is not critical to the process.
-        """
-        return self._unmount_root_volume(ignore_errors=ignore_errors)
-
     def _fetch_root_volume_identifier(self) -> str:
         """
         Resolve path to disk identifier
@@ -136,43 +102,48 @@ class SysPatchMount:
         return True

+    def mount(self) -> str:
+        """
+        Mount the root volume.
+        Returns the path to the root volume.
+        If none, failed to mount.
+        """
+        result = self._mount_root_volume()
+        if result is None:
+            logging.error("Failed to mount root volume")
+            return None
+        if not Path(result).exists():
+            logging.error(f"Attempted to mount root volume, but failed: {result}")
+            return None
+        self.mount_path = result
+        return result
+
+    def unmount(self, ignore_errors: bool = True) -> bool:
+        """
+        Unmount the root volume.
+        Returns True if successful, False otherwise.
+        Note for Big Sur and newer, a snapshot is created before unmounting.
+        And that unmounting is not critical to the process.
+        """
+        return self._unmount_root_volume(ignore_errors=ignore_errors)
+
     def create_snapshot(self) -> bool:
         """
         Create APFS snapshot of the root volume.
         """
-        if self.xnu_major < os_data.os_data.big_sur.value:
-            return True
-
-        args = ["/usr/sbin/bless"]
-        if platform.machine() == "arm64" or self.rosetta_status is True:
-            args += ["--mount", self.mount_path, "--create-snapshot"]
-        else:
-            args += ["--folder", f"{self.mount_path}/System/Library/CoreServices", "--bootefi", "--create-snapshot"]
-
-        result = subprocess_wrapper.run_as_root(args, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
-        if result.returncode != 0:
-            logging.error("Failed to create APFS snapshot")
-            subprocess_wrapper.log(result)
-            if "Can't use last-sealed-snapshot or create-snapshot on non system volume" in result.stdout.decode():
-                logging.info("- This is an APFS bug with Monterey and newer! Perform a clean installation to ensure your APFS volume is built correctly")
-            return False
-
-        return True
+        return APFSSnapshot(self.xnu_major, self.mount_path).create_snapshot()

     def revert_snapshot(self) -> bool:
         """
         Revert APFS snapshot of the root volume.
         """
-        if self.xnu_major < os_data.os_data.big_sur.value:
-            return True
-
-        result = subprocess_wrapper.run_as_root(["/usr/sbin/bless", "--mount", self.mount_path, "--bootefi", "--last-sealed-snapshot"], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
-        if result.returncode != 0:
-            logging.error("Failed to revert APFS snapshot")
-            subprocess_wrapper.log(result)
-            return False
-
-        return True
+        return APFSSnapshot(self.xnu_major, self.mount_path).revert_snapshot()


@@ -0,0 +1,69 @@
"""
snapshot.py: Handling APFS snapshots
"""

import logging
import platform
import subprocess

from ...datasets import os_data
from ...support import subprocess_wrapper


class APFSSnapshot:

    def __init__(self, xnu_major: int, mount_path: str) -> None:
        self.xnu_major = xnu_major
        self.mount_path = mount_path

    def _rosetta_status(self) -> bool:
        """
        Check if currently running inside of Rosetta
        """
        result = subprocess_wrapper.run(["/usr/sbin/sysctl", "-n", "sysctl.proc_translated"], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
        if result.returncode != 0:
            return False

        return result.stdout.decode().strip() == "1"

    def create_snapshot(self) -> bool:
        """
        Create APFS snapshot of the root volume.
        """
        if self.xnu_major < os_data.os_data.big_sur.value:
            return True

        args = ["/usr/sbin/bless"]
        if platform.machine() == "arm64" or self._rosetta_status() is True:
            args += ["--mount", self.mount_path, "--create-snapshot"]
        else:
            args += ["--folder", f"{self.mount_path}/System/Library/CoreServices", "--bootefi", "--create-snapshot"]

        result = subprocess_wrapper.run_as_root(args, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
        if result.returncode != 0:
            logging.error("Failed to create APFS snapshot")
            subprocess_wrapper.log(result)
            if "Can't use last-sealed-snapshot or create-snapshot on non system volume" in result.stdout.decode():
                logging.info("- This is an APFS bug with Monterey and newer! Perform a clean installation to ensure your APFS volume is built correctly")
            return False

        return True

    def revert_snapshot(self) -> bool:
        """
        Revert APFS snapshot of the root volume.
        """
        if self.xnu_major < os_data.os_data.big_sur.value:
            return True

        result = subprocess_wrapper.run_as_root(["/usr/sbin/bless", "--mount", self.mount_path, "--bootefi", "--last-sealed-snapshot"], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
        if result.returncode != 0:
            logging.error("Failed to revert APFS snapshot")
            subprocess_wrapper.log(result)
            return False

        return True
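The `_rosetta_status()` check above relies on `sysctl.proc_translated`, which reports 1 when the current process runs translated under Rosetta 2, 0 when native on Apple Silicon, and fails (non-zero exit) on Intel Macs where the sysctl does not exist. The parsing half of that logic can be pulled out into a pure function:

```python
# Sketch of the proc_translated parsing above, separated from the
# subprocess call so it can be tested without running sysctl.
def parse_proc_translated(returncode: int, stdout: bytes) -> bool:
    if returncode != 0:
        return False  # sysctl missing (e.g. Intel Mac): not translated
    return stdout.decode().strip() == "1"
```

Treating a failed sysctl as "not translated" is what lets the same code path serve Intel and Apple Silicon hosts.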


@@ -38,27 +38,35 @@ This is because Apple removed on-disk binaries (ref: https://github.com/dortania
 import logging
 import plistlib
 import subprocess
-import applescript

 from pathlib import Path
-from datetime import datetime

+from .mount import (
+    RootVolumeMount,
+    APFSSnapshot
+)
+from .utilities import (
+    install_new_file,
+    remove_file,
+    PatcherSupportPkgMount,
+    KernelDebugKitMerge
+)
+
 from .. import constants
 from ..datasets import os_data
+from ..volume import generate_copy_arguments

 from ..support import (
     utilities,
-    kdk_handler,
     subprocess_wrapper
 )
 from . import (
-    sys_patch_detect,
-    sys_patch_auto,
     sys_patch_helpers,
-    sys_patch_generate,
-    sys_patch_mount
+    kernelcache
 )
+from .auto_patcher import InstallAutomaticPatchingServices
+from .detections import DetectRootPatch, GenerateRootPatchSets


 class PatchSysVolume:
@@ -76,33 +84,24 @@ class PatchSysVolume:
         # GUI will detect hardware patches before starting PatchSysVolume()
         # However the TUI will not, so allow for data to be passed in manually avoiding multiple calls
         if hardware_details is None:
-            hardware_details = sys_patch_detect.DetectRootPatch(self.computer.real_model, self.constants).detect_patch_set()
+            hardware_details = DetectRootPatch(self.computer.real_model, self.constants).detect_patch_set()
         self.hardware_details = hardware_details
-        self._init_pathing(custom_root_mount_path=None, custom_data_mount_path=None)
+        self._init_pathing()

         self.skip_root_kmutil_requirement = self.hardware_details["Settings: Supports Auxiliary Cache"]
-        self.mount_obj = sys_patch_mount.SysPatchMount(self.constants.detected_os, self.computer.rosetta_active)
+        self.mount_obj = RootVolumeMount(self.constants.detected_os)

-    def _init_pathing(self, custom_root_mount_path: Path = None, custom_data_mount_path: Path = None) -> None:
+    def _init_pathing(self) -> None:
         """
         Initializes the pathing for root volume patching
-
-        Parameters:
-            custom_root_mount_path (Path): Custom path to mount the root volume
-            custom_data_mount_path (Path): Custom path to mount the data volume
         """
-        if custom_root_mount_path and custom_data_mount_path:
-            self.mount_location = custom_root_mount_path
-            self.data_mount_location = custom_data_mount_path
-        elif self.root_supports_snapshot is True:
-            # Big Sur and newer use APFS snapshots
+        self.mount_location_data = ""
+        if self.root_supports_snapshot is True:
             self.mount_location = "/System/Volumes/Update/mnt1"
-            self.mount_location_data = ""
         else:
             self.mount_location = ""
-            self.mount_location_data = ""

         self.mount_extensions = f"{self.mount_location}/System/Library/Extensions"
         self.mount_application_support = f"{self.mount_location_data}/Library/Application Support"
@@ -135,18 +134,21 @@ class PatchSysVolume:
         mounted_system_version = Path(self.mount_location) / "System/Library/CoreServices/SystemVersion.plist"
         if not mounted_system_version.exists():
-            logging.error("- Failed to find SystemVersion.plist")
+            logging.error("- Failed to find SystemVersion.plist on mounted root volume")
             return False

         try:
             mounted_data = plistlib.load(open(mounted_system_version, "rb"))
             if mounted_data["ProductBuildVersion"] != self.constants.detected_os_build:
-                logging.error(f"- SystemVersion.plist build version mismatch: {mounted_data['ProductBuildVersion']} vs {self.constants.detected_os_build}")
+                logging.error(
+                    f"- SystemVersion.plist build version mismatch: found {mounted_data['ProductVersion']} ({mounted_data['ProductBuildVersion']}), expected {self.constants.detected_os_version} ({self.constants.detected_os_build})"
+                )
+                logging.error("An update is in progress on your machine and patching cannot continue until it is cancelled or finished")
                 return False
         except:
             logging.error("- Failed to parse SystemVersion.plist")
             return False

         return True
@@ -159,114 +161,30 @@ class PatchSysVolume:
             save_hid_cs (bool): If True, will save the HID CS file before merging KDK
                                 Required for USB 1.1 downgrades on Ventura and newer
         """
-        if self.skip_root_kmutil_requirement is True:
-            return
-        if self.constants.detected_os < os_data.os_data.ventura:
-            return
-
-        if self.constants.kdk_download_path.exists():
-            if kdk_handler.KernelDebugKitUtilities().install_kdk_dmg(self.constants.kdk_download_path) is False:
-                logging.info("Failed to install KDK")
-                raise Exception("Failed to install KDK")
-
-        kdk_obj = kdk_handler.KernelDebugKitObject(self.constants, self.constants.detected_os_build, self.constants.detected_os_version)
-        if kdk_obj.success is False:
-            logging.info(f"Unable to get KDK info: {kdk_obj.error_msg}")
-            raise Exception(f"Unable to get KDK info: {kdk_obj.error_msg}")
-
-        if kdk_obj.kdk_already_installed is False:
-            kdk_download_obj = kdk_obj.retrieve_download()
-            if not kdk_download_obj:
-                logging.info(f"Could not retrieve KDK: {kdk_obj.error_msg}")
-
-            # Hold thread until download is complete
-            kdk_download_obj.download(spawn_thread=False)
-
-            if kdk_download_obj.download_complete is False:
-                error_msg = kdk_download_obj.error_msg
-                logging.info(f"Could not download KDK: {error_msg}")
-                raise Exception(f"Could not download KDK: {error_msg}")
-
-            if kdk_obj.validate_kdk_checksum() is False:
-                logging.info(f"KDK checksum validation failed: {kdk_obj.error_msg}")
-                raise Exception(f"KDK checksum validation failed: {kdk_obj.error_msg}")
-
-            kdk_handler.KernelDebugKitUtilities().install_kdk_dmg(self.constants.kdk_download_path)
-
-            # re-init kdk_obj to get the new kdk_installed_path
-            kdk_obj = kdk_handler.KernelDebugKitObject(self.constants, self.constants.detected_os_build, self.constants.detected_os_version)
-            if kdk_obj.success is False:
-                logging.info(f"Unable to get KDK info: {kdk_obj.error_msg}")
-                raise Exception(f"Unable to get KDK info: {kdk_obj.error_msg}")
-
-            if kdk_obj.kdk_already_installed is False:
-                # We shouldn't get here, but just in case
-                logging.warning(f"KDK was not installed, but should have been: {kdk_obj.error_msg}")
-                raise Exception(f"KDK was not installed, but should have been: {kdk_obj.error_msg}")
-
-        kdk_path = Path(kdk_obj.kdk_installed_path) if kdk_obj.kdk_installed_path != "" else None
-
-        oclp_plist = Path("/System/Library/CoreServices/OpenCore-Legacy-Patcher.plist")
-        if (Path(self.mount_location) / Path("System/Library/Extensions/System.kext/PlugIns/Libkern.kext/Libkern")).exists() and oclp_plist.exists():
-            # KDK was already merged, check if the KDK used is the same as the one we're using
-            # If not, we'll rsync over with the new KDK
-            try:
-                oclp_plist_data = plistlib.load(open(oclp_plist, "rb"))
-                if "Kernel Debug Kit Used" in oclp_plist_data:
-                    if oclp_plist_data["Kernel Debug Kit Used"] == str(kdk_path):
-                        logging.info("- Matching KDK determined to already be merged, skipping")
-                        return
-            except:
-                pass
-
-        if kdk_path is None:
-            logging.info("- Unable to find Kernel Debug Kit")
-            raise Exception("Unable to find Kernel Debug Kit")
-        self.kdk_path = kdk_path
-        logging.info(f"- Found KDK at: {kdk_path}")
-
-        # Due to some IOHIDFamily oddities, we need to ensure their CodeSignature is retained
-        cs_path = Path(self.mount_location) / Path("System/Library/Extensions/IOHIDFamily.kext/Contents/PlugIns/IOHIDEventDriver.kext/Contents/_CodeSignature")
-        if save_hid_cs is True and cs_path.exists():
-            logging.info("- Backing up IOHIDEventDriver CodeSignature")
-            # Note it's a folder, not a file
-            subprocess_wrapper.run_as_root(["/bin/cp", "-r", cs_path, f"{self.constants.payload_path}/IOHIDEventDriver_CodeSignature.bak"], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
-
-        logging.info(f"- Merging KDK with Root Volume: {kdk_path.name}")
-        subprocess_wrapper.run_as_root(
-            # Only merge '/System/Library/Extensions'
-            # 'Kernels' and 'KernelSupport' is wasted space for root patching (we don't care about dev kernels)
-            ["/usr/bin/rsync", "-r", "-i", "-a", f"{kdk_path}/System/Library/Extensions/", f"{self.mount_location}/System/Library/Extensions"],
-            stdout=subprocess.PIPE, stderr=subprocess.STDOUT
-        )
-
-        # During reversing, we found that kmutil uses this path to determine whether the KDK was successfully merged
-        # Best to verify now before we cause any damage
-        if not (Path(self.mount_location) / Path("System/Library/Extensions/System.kext/PlugIns/Libkern.kext/Libkern")).exists():
-            logging.info("- Failed to merge KDK with Root Volume")
-            raise Exception("Failed to merge KDK with Root Volume")
-        logging.info("- Successfully merged KDK with Root Volume")
-
-        # Restore IOHIDEventDriver CodeSignature
-        if save_hid_cs is True and Path(f"{self.constants.payload_path}/IOHIDEventDriver_CodeSignature.bak").exists():
-            logging.info("- Restoring IOHIDEventDriver CodeSignature")
-            if not cs_path.exists():
-                logging.info(" - CodeSignature folder missing, creating")
-                subprocess_wrapper.run_as_root(["/bin/mkdir", "-p", cs_path], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
-            subprocess_wrapper.run_as_root(["/bin/cp", "-r", f"{self.constants.payload_path}/IOHIDEventDriver_CodeSignature.bak", cs_path], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
-            subprocess_wrapper.run_as_root(["/bin/rm", "-rf", f"{self.constants.payload_path}/IOHIDEventDriver_CodeSignature.bak"], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
+        self.kdk_path = KernelDebugKitMerge(
+            self.constants,
+            self.mount_location,
+            self.skip_root_kmutil_requirement
+        ).merge(save_hid_cs)

     def _unpatch_root_vol(self):
         """
         Reverts APFS snapshot and cleans up any changes made to the root and data volume
         """
-        if self.mount_obj.revert_snapshot() is False:
+        if APFSSnapshot(self.constants.detected_os, self.mount_location).revert_snapshot() is False:
             return

         self._clean_skylight_plugins()
         self._delete_nonmetal_enforcement()
-        self._clean_auxiliary_kc()
+
+        kernelcache.KernelCacheSupport(
+            mount_location_data=self.mount_location_data,
+            detected_os=self.constants.detected_os,
+            skip_root_kmutil_requirement=self.skip_root_kmutil_requirement
+        ).clean_auxiliary_kc()

         self.constants.root_patcher_succeeded = True
         logging.info("- Unpatching complete")
         logging.info("\nPlease reboot the machine for patches to take effect")
@@ -283,120 +201,46 @@ class PatchSysVolume:
Returns: Returns:
bool: True if successful, False if not bool: True if successful, False if not
""" """
if self._rebuild_kernel_cache() is False:
if self._rebuild_kernel_collection() is True:
self._update_preboot_kernel_cache()
self._rebuild_dyld_shared_cache()
if self._create_new_apfs_snapshot() is True:
self._unmount_root_vol()
logging.info("- Patching complete")
logging.info("\nPlease reboot the machine for patches to take effect")
if self.needs_kmutil_exemptions is True:
logging.info("Note: Apple will require you to open System Preferences -> Security to allow the new kernel extensions to be loaded")
self.constants.root_patcher_succeeded = True
return True
return False
def _rebuild_kernel_collection(self) -> bool:
"""
Rebuilds the Kernel Collection
Supports following KC generation:
- Boot/SysKC (11.0+)
- AuxKC (11.0+)
- PrelinkedKernel (10.15-)
Returns:
bool: True if successful, False if not
"""
logging.info("- Rebuilding Kernel Cache (This may take some time)")
if self.constants.detected_os > os_data.os_data.catalina:
# Base Arguments
args = ["/usr/bin/kmutil", "install"]
if self.skip_root_kmutil_requirement is True:
# Only rebuild the Auxiliary Kernel Collection
args.append("--new")
args.append("aux")
args.append("--boot-path")
args.append(f"{self.mount_location}/System/Library/KernelCollections/BootKernelExtensions.kc")
args.append("--system-path")
args.append(f"{self.mount_location}/System/Library/KernelCollections/SystemKernelExtensions.kc")
else:
# Rebuild Boot, System and Auxiliary Kernel Collections
args.append("--volume-root")
args.append(self.mount_location)
# Build Boot, Sys and Aux KC
args.append("--update-all")
# If multiple kernels found, only build release KCs
args.append("--variant-suffix")
args.append("release")
if self.constants.detected_os >= os_data.os_data.ventura:
# With Ventura, we're required to provide a KDK in some form
# to rebuild the Kernel Cache
#
# However since we already merged the KDK onto root with 'ditto',
# We can add '--allow-missing-kdk' to skip parsing the KDK
#
# This allows us to only delete/overwrite kexts inside of
# /System/Library/Extensions and not the entire KDK
args.append("--allow-missing-kdk")
# 'install' and '--update-all' cannot be used together in Ventura.
# kmutil will request the usage of 'create' instead:
# Warning: kmutil install's usage of --update-all is deprecated.
# Use kmutil create --update-install instead'
args[1] = "create"
if self.needs_kmutil_exemptions is True:
# When installing to '/Library/Extensions', following args skip kext consent
# prompt in System Preferences when SIP's disabled
logging.info(" (You will get a prompt by System Preferences, ignore for now)")
args.append("--no-authentication")
args.append("--no-authorization")
else:
args = ["/usr/sbin/kextcache", "-i", f"{self.mount_location}/"]
result = subprocess_wrapper.run_as_root(args, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
# kextcache notes:
# - kextcache always returns 0, even if it fails
# - Check the output for 'KernelCache ID' to see if the cache was successfully rebuilt
# kmutil notes:
# - will return 71 on failure to build KCs
# - will return 31 on 'No binaries or codeless kexts were provided'
# - will return -10 if the volume is missing (ie. unmounted by another process)
if result.returncode != 0 or (self.constants.detected_os < os_data.os_data.catalina and "KernelCache ID" not in result.stdout.decode()):
logging.info("- Unable to build new kernel cache")
subprocess_wrapper.log(result)
logging.info("")
logging.info("\nPlease reboot the machine to avoid potential issues rerunning the patcher")
return False
if self.skip_root_kmutil_requirement is True:
# Force rebuild the Auxiliary KC
result = subprocess_wrapper.run_as_root(["/usr/bin/killall", "syspolicyd", "kernelmanagerd"], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
if result.returncode != 0:
logging.info("- Unable to remove kernel extension policy files")
subprocess_wrapper.log(result)
logging.info("")
logging.info("\nPlease reboot the machine to avoid potential issues rerunning the patcher")
return False
for file in ["KextPolicy", "KextPolicy-shm", "KextPolicy-wal"]:
self._remove_file("/private/var/db/SystemPolicyConfiguration/", file)
else:
# Install RSRHelper utility to handle desynced KCs
self._update_preboot_kernel_cache()
self._rebuild_dyld_shared_cache()
if self._create_new_apfs_snapshot() is False:
return False
self._unmount_root_vol()
logging.info("- Patching complete")
logging.info("\nPlease reboot the machine for patches to take effect")
if self.needs_kmutil_exemptions is True:
logging.info("Note: Apple will require you to open System Preferences -> Security to allow the new kernel extensions to be loaded")
self.constants.root_patcher_succeeded = True
return True
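The kmutil exit-code notes earlier in this hunk (71, 31, -10) can be collected into a small lookup for diagnostics. This is a sketch built only from the codes listed in the diff's comments; the helper name and dictionary are ours, not part of the patcher:

```python
# Map the kmutil exit codes noted in the comments above to readable hints.
# The code-to-meaning pairs come from this diff; the helper is illustrative.
KMUTIL_ERRORS = {
    71: "Failed to build new kernel collections",
    31: "No binaries or codeless kexts were provided",
    -10: "Target volume missing (possibly unmounted by another process)",
}

def describe_kmutil_result(returncode: int) -> str:
    """Return a diagnostic string for a kmutil exit status."""
    if returncode == 0:
        return "Kernel cache rebuilt successfully"
    hint = KMUTIL_ERRORS.get(returncode, "Unknown kmutil failure")
    return f"kmutil exited with {returncode}: {hint}"
```

A wrapper like this keeps the log output consistent regardless of which kmutil subcommand produced the failure.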
def _rebuild_kernel_cache(self) -> bool:
"""
Rebuilds the Kernel Cache
"""
result = kernelcache.RebuildKernelCache(
os_version=self.constants.detected_os,
mount_location=self.mount_location,
auxiliary_cache=self.needs_kmutil_exemptions,
auxiliary_cache_only=self.skip_root_kmutil_requirement
).rebuild()
if result is False:
return False
if self.skip_root_kmutil_requirement is False:
sys_patch_helpers.SysPatchHelpers(self.constants).install_rsr_repair_binary()
logging.info("- Successfully built new kernel cache")
return True
@@ -407,7 +251,7 @@ class PatchSysVolume:
Returns:
bool: True if snapshot was created, False if not
"""
return self.mount_obj.create_snapshot()
return APFSSnapshot(self.constants.detected_os, self.mount_location).create_snapshot()
def _rebuild_dyld_shared_cache(self) -> None:
@@ -460,61 +304,6 @@ class PatchSysVolume:
subprocess_wrapper.run_as_root(["/usr/bin/defaults", "delete", "/Library/Preferences/com.apple.CoreDisplay", arg])
def _clean_auxiliary_kc(self) -> None:
"""
Clean the Auxiliary Kernel Collection
Logic:
When reverting root volume patches, the AuxKC will still retain the UUID
it was built against. Thus when Boot/SysKC are reverted, Aux will break
To resolve this, delete all installed kexts in /L*/E* and rebuild the AuxKC
We can verify our binaries based off the OpenCore-Legacy-Patcher.plist file
"""
if self.constants.detected_os < os_data.os_data.big_sur:
return
logging.info("- Cleaning Auxiliary Kernel Collection")
oclp_path = "/System/Library/CoreServices/OpenCore-Legacy-Patcher.plist"
if Path(oclp_path).exists():
oclp_plist_data = plistlib.load(Path(oclp_path).open("rb"))
for key in oclp_plist_data:
if isinstance(oclp_plist_data[key], (bool, int)):
continue
for install_type in ["Install", "Install Non-Root"]:
if install_type not in oclp_plist_data[key]:
continue
for location in oclp_plist_data[key][install_type]:
if not location.endswith("Extensions"):
continue
for file in oclp_plist_data[key][install_type][location]:
if not file.endswith(".kext"):
continue
self._remove_file("/Library/Extensions", file)
# Handle situations where users migrated from older OSes with a lot of garbage in /L*/E*
# ex. Nvidia Web Drivers, NetUSB, dosdude1's patches, etc.
# Relocate files last modified before October 2021 (a year before Ventura's release)
if self.constants.detected_os < os_data.os_data.ventura:
return
relocation_path = "/Library/Relocated Extensions"
if not Path(relocation_path).exists():
subprocess_wrapper.run_as_root(["/bin/mkdir", relocation_path])
for file in Path("/Library/Extensions").glob("*.kext"):
try:
if datetime.fromtimestamp(file.stat().st_mtime) < datetime(2021, 10, 1):
logging.info(f" - Relocating {file.name} kext to {relocation_path}")
if (Path(relocation_path) / file.name).exists():
subprocess_wrapper.run_as_root(["/bin/rm", "-Rf", Path(relocation_path) / file.name])
subprocess_wrapper.run_as_root(["/bin/mv", file, relocation_path])
except:
# Some users have the most cursed /L*/E* folders
# ex. Symlinks pointing to symlinks pointing to dead files
pass
def _write_patchset(self, patchset: dict) -> None:
"""
Write patchset information to Root Volume
@@ -530,94 +319,7 @@ class PatchSysVolume:
logging.info("- Writing patchset information to Root Volume")
if Path(destination_path_file).exists():
subprocess_wrapper.run_as_root_and_verify(["/bin/rm", destination_path_file], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
subprocess_wrapper.run_as_root_and_verify(["/bin/cp", f"{self.constants.payload_path}/{file_name}", destination_path], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
subprocess_wrapper.run_as_root_and_verify(generate_copy_arguments(f"{self.constants.payload_path}/{file_name}", destination_path), stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
def _add_auxkc_support(self, install_file: str, source_folder_path: str, install_patch_directory: str, destination_folder_path: str) -> str:
"""
Patch provided Kext to support Auxiliary Kernel Collection
Logic:
In macOS Ventura, KDKs are required to build new Boot and System KCs
However for some patch sets, we're able to use the Auxiliary KCs with '/Library/Extensions'
kernelmanagerd determines which kext is installed by their 'OSBundleRequired' entry
If a kext is labeled as 'OSBundleRequired: Root' or 'OSBundleRequired: Safe Boot',
kernelmanagerd will require the kext to be installed in the Boot/SysKC
Additionally, kexts starting with 'com.apple.' are not natively allowed to be installed
in the AuxKC. So we need to explicitly set our 'OSBundleRequired' to 'Auxiliary'
Parameters:
install_file (str): Kext file name
source_folder_path (str): Source folder path
install_patch_directory (str): Patch directory
destination_folder_path (str): Destination folder path
Returns:
str: Updated destination folder path
"""
if self.skip_root_kmutil_requirement is False:
return destination_folder_path
if not install_file.endswith(".kext"):
return destination_folder_path
if install_patch_directory != "/System/Library/Extensions":
return destination_folder_path
if self.constants.detected_os < os_data.os_data.ventura:
return destination_folder_path
updated_install_location = str(self.mount_location_data) + "/Library/Extensions"
logging.info(f" - Adding AuxKC support to {install_file}")
plist_path = Path(Path(source_folder_path) / Path(install_file) / Path("Contents/Info.plist"))
plist_data = plistlib.load((plist_path).open("rb"))
# Check if we need to update the 'OSBundleRequired' entry
if not plist_data["CFBundleIdentifier"].startswith("com.apple."):
return updated_install_location
if "OSBundleRequired" in plist_data:
if plist_data["OSBundleRequired"] == "Auxiliary":
return updated_install_location
plist_data["OSBundleRequired"] = "Auxiliary"
plistlib.dump(plist_data, plist_path.open("wb"))
self._check_kexts_needs_authentication(install_file)
return updated_install_location
def _check_kexts_needs_authentication(self, kext_name: str):
"""
Verify whether the user needs to authenticate in System Preferences
Sets 'needs_to_open_preferences' to True if the kext is not in the AuxKC
Logic:
Under 'private/var/db/KernelManagement/AuxKC/CurrentAuxKC/com.apple.kcgen.instructions.plist'
["kextsToBuild"][i]:
["bundlePathMainOS"] = /Library/Extensions/Test.kext
["cdHash"] = Bundle's CDHash (random on ad-hoc signed, static on dev signed)
["teamID"] = Team ID (blank on ad-hoc signed)
To grab the CDHash of a kext, run 'codesign -dvvv <kext_path>'
Parameters:
kext_name (str): Name of the kext to check
"""
try:
aux_cache_path = Path(self.mount_location_data) / Path("/private/var/db/KernelExtensionManagement/AuxKC/CurrentAuxKC/com.apple.kcgen.instructions.plist")
if aux_cache_path.exists():
aux_cache_data = plistlib.load((aux_cache_path).open("rb"))
for kext in aux_cache_data["kextsToBuild"]:
if "bundlePathMainOS" in aux_cache_data["kextsToBuild"][kext]:
if aux_cache_data["kextsToBuild"][kext]["bundlePathMainOS"] == f"/Library/Extensions/{kext_name}":
return
except PermissionError:
pass
logging.info(f" - {kext_name} requires authentication in System Preferences")
self.constants.needs_to_open_preferences = True # Notify in GUI to open System Preferences
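The AuxKC membership test above can be sketched against an already-parsed instructions plist. The helper name `kext_in_aux_kc` and the sample layout are ours, following the `kextsToBuild` / `bundlePathMainOS` keys listed in the docstring:

```python
def kext_in_aux_kc(instructions: dict, kext_name: str) -> bool:
    """True if the kext is already listed in kcgen's kextsToBuild entries."""
    kexts = instructions.get("kextsToBuild", {})
    for entry in kexts.values():
        if entry.get("bundlePathMainOS") == f"/Library/Extensions/{kext_name}":
            return True
    return False
```

A kext missing from these instructions is the signal that the user still needs to approve it in System Preferences.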
def _patch_root_vol(self):
@@ -629,13 +331,13 @@ class PatchSysVolume:
if self.patch_set_dictionary != {}:
self._execute_patchset(self.patch_set_dictionary)
else:
self._execute_patchset(sys_patch_generate.GenerateRootPatchSets(self.computer.real_model, self.constants, self.hardware_details).patchset)
self._execute_patchset(GenerateRootPatchSets(self.computer.real_model, self.constants, self.hardware_details).patchset)
if self.constants.wxpython_variant is True and self.constants.detected_os >= os_data.os_data.big_sur:
needs_daemon = False
if self.constants.detected_os >= os_data.os_data.ventura and self.skip_root_kmutil_requirement is False:
needs_daemon = True
sys_patch_auto.AutomaticSysPatch(self.constants).install_auto_patcher_launch_agent(kdk_caching_needed=needs_daemon)
InstallAutomaticPatchingServices(self.constants).install_auto_patcher_launch_agent(kdk_caching_needed=needs_daemon)
self._rebuild_root_volume()
@@ -648,6 +350,12 @@ class PatchSysVolume:
required_patches (dict): Patchset to execute (generated by sys_patch_generate.GenerateRootPatchSets)
"""
kc_support_obj = kernelcache.KernelCacheSupport(
mount_location_data=self.mount_location_data,
detected_os=self.constants.detected_os,
skip_root_kmutil_requirement=self.skip_root_kmutil_requirement
)
source_files_path = str(self.constants.payload_local_binaries_root_path)
self._preflight_checks(required_patches, source_files_path)
for patch in required_patches:
@@ -661,35 +369,42 @@ class PatchSysVolume:
destination_folder_path = str(self.mount_location) + remove_patch_directory
else:
destination_folder_path = str(self.mount_location_data) + remove_patch_directory
self._remove_file(destination_folder_path, remove_patch_file)
remove_file(destination_folder_path, remove_patch_file)
for method_install in ["Install", "Install Non-Root"]:
if method_install in required_patches[patch]:
if method_install not in required_patches[patch]:
continue
for install_patch_directory in list(required_patches[patch][method_install]):
logging.info(f"- Handling Installs in: {install_patch_directory}")
for install_file in list(required_patches[patch][method_install][install_patch_directory]):
source_folder_path = source_files_path + "/" + required_patches[patch][method_install][install_patch_directory][install_file] + install_patch_directory
if method_install == "Install":
destination_folder_path = str(self.mount_location) + install_patch_directory
else:
if install_patch_directory == "/Library/Extensions":
self.needs_kmutil_exemptions = True
self._check_kexts_needs_authentication(install_file)
if kc_support_obj.check_kexts_needs_authentication(install_file) is True:
self.constants.needs_to_open_preferences = True
destination_folder_path = str(self.mount_location_data) + install_patch_directory
updated_destination_folder_path = self._add_auxkc_support(install_file, source_folder_path, install_patch_directory, destination_folder_path)
updated_destination_folder_path = kc_support_obj.add_auxkc_support(install_file, source_folder_path, install_patch_directory, destination_folder_path)
if updated_destination_folder_path != destination_folder_path:
if kc_support_obj.check_kexts_needs_authentication(install_file) is True:
self.constants.needs_to_open_preferences = True
if destination_folder_path != updated_destination_folder_path:
# Update required_patches to reflect the new destination folder path
if updated_destination_folder_path not in required_patches[patch][method_install]:
required_patches[patch][method_install].update({updated_destination_folder_path: {}})
required_patches[patch][method_install][updated_destination_folder_path].update({install_file: required_patches[patch][method_install][install_patch_directory][install_file]})
required_patches[patch][method_install][install_patch_directory].pop(install_file)
destination_folder_path = updated_destination_folder_path
self._install_new_file(source_folder_path, destination_folder_path, install_file)
install_new_file(source_folder_path, destination_folder_path, install_file)
if "Processes" in required_patches[patch]:
for process in required_patches[patch]["Processes"]:
@@ -701,6 +416,7 @@ class PatchSysVolume:
else:
logging.info(f"- Running Process:\n{process}")
subprocess_wrapper.run_and_verify(process, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True)
if any(x in required_patches for x in ["AMD Legacy GCN", "AMD Legacy Polaris", "AMD Legacy Vega"]):
sys_patch_helpers.SysPatchHelpers(self.constants).disable_window_server_caching()
if "Metal 3802 Common Extended" in required_patches:
@@ -725,7 +441,11 @@ class PatchSysVolume:
# Make sure non-Metal Enforcement preferences are not present
self._delete_nonmetal_enforcement()
# Make sure we clean old kexts in /L*/E* that are not in the patchset
self._clean_auxiliary_kc()
kernelcache.KernelCacheSupport(
mount_location_data=self.mount_location_data,
detected_os=self.constants.detected_os,
skip_root_kmutil_requirement=self.skip_root_kmutil_requirement
).clean_auxiliary_kc()
# Make sure SNB kexts are compatible with the host
if "Intel Sandy Bridge" in required_patches:
@@ -734,12 +454,13 @@ class PatchSysVolume:
for patch in required_patches:
# Check if all files are present
for method_type in ["Install", "Install Non-Root"]:
if method_type in required_patches[patch]:
if method_type not in required_patches[patch]:
continue
for install_patch_directory in required_patches[patch][method_type]:
for install_file in required_patches[patch][method_type][install_patch_directory]:
source_file = source_files_path + "/" + required_patches[patch][method_type][install_patch_directory][install_file] + install_patch_directory + "/" + install_file
if not Path(source_file).exists():
raise Exception(f"Failed to find {source_file}")
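The path assembly in the preflight loop above is easy to misread: each patchset entry maps a file name to a payload subfolder, and the on-disk path is rebuilt by concatenation. A sketch of just that concatenation (helper name and sample values are ours):

```python
def build_source_path(source_root: str, payload_subfolder: str,
                      install_directory: str, file_name: str) -> str:
    """Reassemble the on-disk path for one patchset file, mirroring the
    string concatenation used in the preflight check above."""
    return source_root + "/" + payload_subfolder + install_directory + "/" + file_name
```

Note that `install_directory` carries its own leading slash (e.g. `/System/Library/Extensions`), which is why no separator is inserted before it.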
# Ensure KDK is properly installed
self._merge_kdk_with_root(save_hid_cs=True if "Legacy USB 1.1" in required_patches else False)
@@ -747,188 +468,6 @@ class PatchSysVolume:
logging.info("- Finished Preflight, starting patching")
def _install_new_file(self, source_folder: Path, destination_folder: Path, file_name: str) -> None:
"""
Installs a new file to the destination folder
File handling logic:
- .frameworks are merged with the destination folder
- Other files are deleted and replaced (ex. .kexts, .apps)
Parameters:
source_folder (Path): Path to the source folder
destination_folder (Path): Path to the destination folder
file_name (str): Name of the file to install
"""
file_name_str = str(file_name)
if not Path(destination_folder).exists():
logging.info(f" - Skipping {file_name}, cannot locate {source_folder}")
return
if file_name_str.endswith(".framework"):
# merge with rsync
logging.info(f" - Installing: {file_name}")
subprocess_wrapper.run_as_root(["/usr/bin/rsync", "-r", "-i", "-a", f"{source_folder}/{file_name}", f"{destination_folder}/"], stdout=subprocess.PIPE)
self._fix_permissions(destination_folder + "/" + file_name)
elif Path(source_folder + "/" + file_name_str).is_dir():
# Applicable for .kext, .app, .plugin, .bundle, all of which are directories
if Path(destination_folder + "/" + file_name).exists():
logging.info(f" - Found existing {file_name}, overwriting...")
subprocess_wrapper.run_as_root_and_verify(["/bin/rm", "-R", f"{destination_folder}/{file_name}"], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
else:
logging.info(f" - Installing: {file_name}")
subprocess_wrapper.run_as_root_and_verify(["/bin/cp", "-R", f"{source_folder}/{file_name}", destination_folder], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
self._fix_permissions(destination_folder + "/" + file_name)
else:
# Assume it's an individual file, replace as normal
if Path(destination_folder + "/" + file_name).exists():
logging.info(f" - Found existing {file_name}, overwriting...")
subprocess_wrapper.run_as_root_and_verify(["/bin/rm", f"{destination_folder}/{file_name}"], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
else:
logging.info(f" - Installing: {file_name}")
subprocess_wrapper.run_as_root_and_verify(["/bin/cp", f"{source_folder}/{file_name}", destination_folder], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
self._fix_permissions(destination_folder + "/" + file_name)
def _remove_file(self, destination_folder: Path, file_name: str) -> None:
"""
Removes a file from the destination folder
Parameters:
destination_folder (Path): Path to the destination folder
file_name (str): Name of the file to remove
"""
if Path(destination_folder + "/" + file_name).exists():
logging.info(f" - Removing: {file_name}")
if Path(destination_folder + "/" + file_name).is_dir():
subprocess_wrapper.run_as_root_and_verify(["/bin/rm", "-R", f"{destination_folder}/{file_name}"], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
else:
subprocess_wrapper.run_as_root_and_verify(["/bin/rm", f"{destination_folder}/{file_name}"], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
def _fix_permissions(self, destination_file: Path) -> None:
"""
Fix file permissions for a given file or directory
"""
chmod_args = ["/bin/chmod", "-Rf", "755", destination_file]
chown_args = ["/usr/sbin/chown", "-Rf", "root:wheel", destination_file]
if not Path(destination_file).is_dir():
# Strip recursive arguments
chmod_args.pop(1)
chown_args.pop(1)
subprocess_wrapper.run_as_root_and_verify(chmod_args, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
subprocess_wrapper.run_as_root_and_verify(chown_args, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
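The flag-stripping trick in `_fix_permissions` above — build the recursive argument vectors first, then `pop` the `-Rf` flag for plain files — can be shown without executing anything. The helper name is ours; the command paths and modes match the code above:

```python
def permission_commands(target: str, is_dir: bool) -> list:
    """Build the chmod/chown argument vectors for a patched file.

    Directories keep the recursive '-Rf' flag; plain files drop it.
    No commands are executed here; this only constructs the argv lists.
    """
    chmod = ["/bin/chmod", "-Rf", "755", target]
    chown = ["/usr/sbin/chown", "-Rf", "root:wheel", target]
    if not is_dir:
        chmod.pop(1)  # strip the recursive flag for plain files
        chown.pop(1)
    return [chmod, chown]
```

Building both vectors up front and mutating them keeps the two commands guaranteed to agree on recursion.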
def _check_files(self) -> bool:
"""
Check if all files are present (primarily PatcherSupportPkg resources)
Returns:
bool: True if all files are present, False otherwise
"""
if Path(self.constants.payload_local_binaries_root_path).exists():
logging.info("- Local PatcherSupportPkg resources available, continuing...")
return True
if Path(self.constants.payload_local_binaries_root_path_dmg).exists():
logging.info("- Local PatcherSupportPkg resources available, mounting...")
output = subprocess.run(
[
"/usr/bin/hdiutil", "attach", "-noverify", f"{self.constants.payload_local_binaries_root_path_dmg}",
"-mountpoint", Path(self.constants.payload_path / Path("Universal-Binaries")),
"-nobrowse",
"-shadow", Path(self.constants.payload_path / Path("Universal-Binaries_overlay")),
"-passphrase", "password"
],
stdout=subprocess.PIPE, stderr=subprocess.STDOUT
)
if output.returncode != 0:
logging.info("- Failed to mount Universal-Binaries.dmg")
subprocess_wrapper.log(output)
return False
logging.info("- Mounted Universal-Binaries.dmg")
if self.constants.cli_mode is False and Path(self.constants.overlay_psp_path_dmg).exists() and Path("~/.dortania_developer").expanduser().exists():
icon_path = str(self.constants.app_icon_path).replace("/", ":")[1:]
msg = "Welcome to the DortaniaInternal Program, please provide the decryption key to access internal resources. Press cancel to skip."
password = Path("~/.dortania_developer_key").expanduser().read_text().strip() if Path("~/.dortania_developer_key").expanduser().exists() else ""
for i in range(3):
try:
if password == "":
password = applescript.AppleScript(
f"""
set theResult to display dialog "{msg}" default answer "" with hidden answer with title "OpenCore Legacy Patcher" with icon file "{icon_path}"
return the text returned of theResult
"""
).run()
result = subprocess.run(
[
"/usr/bin/hdiutil", "attach", "-noverify", f"{self.constants.overlay_psp_path_dmg}",
"-mountpoint", Path(self.constants.payload_path / Path("DortaniaInternal")),
"-nobrowse",
"-passphrase", password
],
stdout=subprocess.PIPE, stderr=subprocess.STDOUT
)
if result.returncode == 0:
logging.info("- Mounted DortaniaInternal resources")
result = subprocess.run(
[
"/usr/bin/ditto", f"{self.constants.payload_path / Path('DortaniaInternal')}", f"{self.constants.payload_path / Path('Universal-Binaries')}"
],
stdout=subprocess.PIPE, stderr=subprocess.STDOUT
)
if result.returncode == 0:
return True
logging.info("- Failed to merge DortaniaInternal resources")
subprocess_wrapper.log(result)
return False
logging.info("- Failed to mount DortaniaInternal resources")
subprocess_wrapper.log(result)
if "Authentication error" not in result.stdout.decode():
try:
# Display that the disk image might be corrupted
applescript.AppleScript(
f"""
display dialog "Failed to mount DortaniaInternal resources, please file an internal radar:\n\n{result.stdout.decode()}" with title "OpenCore Legacy Patcher" with icon file "{icon_path}"
"""
).run()
return False
except Exception as e:
pass
break
msg = f"Decryption failed, please try again. {2 - i} attempts remaining. "
password = ""
if i == 2:
applescript.AppleScript(
f"""
display dialog "Failed to mount DortaniaInternal resources, too many incorrect passwords. If this continues with the correct decryption key, please file an internal radar." with title "OpenCore Legacy Patcher" with icon file "{icon_path}"
"""
).run()
return False
except Exception as e:
break
return True
logging.info("- PatcherSupportPkg resources missing, Patcher likely corrupted!!!")
return False
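The three-attempt decryption flow in `_check_files` above — prompt for a passphrase, try to mount, give up after three wrong answers — can be sketched with the dialog and `hdiutil` factored out as injected callables. `prompt` and `try_mount` are stand-ins of our own naming, not patcher APIs:

```python
def unlock_with_retries(prompt, try_mount, attempts: int = 3) -> bool:
    """Return True once try_mount accepts a passphrase, False after N failures.

    prompt: () -> str, stands in for the AppleScript password dialog.
    try_mount: (str) -> bool, stands in for the hdiutil attach call.
    """
    for _ in range(attempts):
        password = prompt()
        if try_mount(password):
            return True
    return False
```

Separating the retry policy from the mount call is what the later `PatcherSupportPkgMount` refactor in this diff moves toward: the loop becomes testable without touching disk images.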
# Entry Function
def start_patch(self):
"""
@@ -937,27 +476,33 @@ class PatchSysVolume:
logging.info("- Starting Patch Process")
logging.info(f"- Determining Required Patch set for Darwin {self.constants.detected_os}")
self.patch_set_dictionary = sys_patch_generate.GenerateRootPatchSets(self.computer.real_model, self.constants, self.hardware_details).patchset
self.patch_set_dictionary = GenerateRootPatchSets(self.computer.real_model, self.constants, self.hardware_details).patchset
if self.patch_set_dictionary == {}:
logging.info("- No Root Patches required for your machine!")
return
logging.info("- Verifying whether Root Patching possible")
if sys_patch_detect.DetectRootPatch(self.computer.real_model, self.constants).verify_patch_allowed(print_errors=not self.constants.wxpython_variant) is False:
if DetectRootPatch(self.computer.real_model, self.constants).verify_patch_allowed(print_errors=not self.constants.wxpython_variant) is False:
logging.error("- Cannot continue with patching!!!")
return
logging.info("- Patcher is capable of patching")
if self._check_files():
if self._mount_root_vol() is True:
if self._run_sanity_checks():
self._patch_root_vol()
else:
self._unmount_root_vol()
logging.info("- Please ensure that you do not have any updates pending")
else:
logging.info("- Recommend rebooting the machine and trying to patch again")
if PatcherSupportPkgMount(self.constants).mount() is False:
logging.error("- Critical resources missing, cannot continue with patching!!!")
return
if self._mount_root_vol() is False:
logging.error("- Failed to mount root volume, cannot continue with patching!!!")
return
if self._run_sanity_checks() is False:
self._unmount_root_vol()
logging.error("- Failed sanity checks, cannot continue with patching!!!")
logging.error("- Please ensure that you do not have any updates pending")
return
self._patch_root_vol()
def start_unpatch(self) -> None:
@@ -966,11 +511,12 @@ class PatchSysVolume:
"""
logging.info("- Starting Unpatch Process")
if sys_patch_detect.DetectRootPatch(self.computer.real_model, self.constants).verify_patch_allowed(print_errors=True) is False:
if DetectRootPatch(self.computer.real_model, self.constants).verify_patch_allowed(print_errors=True) is False:
logging.error("- Cannot continue with unpatching!!!")
return
if self._mount_root_vol() is True:
self._unpatch_root_vol()
else:
logging.info("- Recommend rebooting the machine and trying to patch again")
if self._mount_root_vol() is False:
logging.error("- Failed to mount root volume, cannot continue with unpatching!!!")
return
self._unpatch_root_vol()


@@ -14,9 +14,9 @@ from datetime import datetime
from .. import constants
from ..datasets import os_data
from ..volume import generate_copy_arguments
from ..support import (
bplist,
generate_smbios,
subprocess_wrapper
)
@@ -82,7 +82,7 @@ class SysPatchHelpers:
Generate patchset file for user reference
Parameters:
patchset (dict): Dictionary of patchset, see sys_patch_detect.py and sys_patch_dict.py
patchset (dict): Dictionary of patchset, see detect.py and sys_patch_dict.py
file_name (str): Name of the file to write to
kdk_used (Path): Path to the KDK used, if any
@@ -134,68 +134,18 @@ class SysPatchHelpers:
"""
if self.constants.detected_os < os_data.os_data.ventura:
return
logging.info("Disabling WindowServer Caching")
# Invoke via 'bash -c' to resolve pathing
subprocess_wrapper.run_as_root(["/bin/bash", "-c", "rm -rf /private/var/folders/*/*/*/WindowServer/com.apple.WindowServer"])
subprocess_wrapper.run_as_root(["/bin/bash", "-c", "/bin/rm -rf /private/var/folders/*/*/*/WindowServer/com.apple.WindowServer"])
# Disable writing to WindowServer folder
subprocess_wrapper.run_as_root(["/bin/bash", "-c", "chflags uchg /private/var/folders/*/*/*/WindowServer"])
subprocess_wrapper.run_as_root(["/bin/bash", "-c", "/usr/bin/chflags uchg /private/var/folders/*/*/*/WindowServer"])
# Reference:
# To reverse write lock:
# 'chflags nouchg /private/var/folders/*/*/*/WindowServer'
def remove_news_widgets(self):
"""
Remove News Widgets from Notification Centre
On Ivy Bridge and Haswell iGPUs, RenderBox will crash the News Widgets in
Notification Centre. To ensure users can access Notifications normally,
we manually remove all News Widgets
"""
if self.constants.detected_os < os_data.os_data.ventura:
return
logging.info("Parsing Notification Centre Widgets")
file_path = "~/Library/Containers/com.apple.notificationcenterui/Data/Library/Preferences/com.apple.notificationcenterui.plist"
file_path = Path(file_path).expanduser()
if not file_path.exists():
logging.info("- Defaults file not found, skipping")
return
did_find = False
with open(file_path, "rb") as f:
data = plistlib.load(f)
if "widgets" not in data:
return
if "instances" not in data["widgets"]:
return
for widget in list(data["widgets"]["instances"]):
widget_data = bplist.BPListReader(widget).parse()
for entry in widget_data:
if 'widget' not in entry:
continue
sub_data = bplist.BPListReader(widget_data[entry]).parse()
for sub_entry in sub_data:
if not '$object' in sub_entry:
continue
if not b'com.apple.news' in sub_data[sub_entry][2]:
continue
logging.info(f"- Found News Widget to remove: {sub_data[sub_entry][2].decode('ascii')}")
data["widgets"]["instances"].remove(widget)
did_find = True
if did_find:
with open(file_path, "wb") as f:
plistlib.dump(data, f, sort_keys=False)
subprocess.run(["/usr/bin/killall", "NotificationCenter"], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
def install_rsr_repair_binary(self):
"""
Installs RSRRepair
@@ -283,6 +233,6 @@ class SysPatchHelpers:
src_dir = f"{LIBRARY_DIR}/{file.name}"
if not Path(f"{DEST_DIR}/lib").exists():
subprocess_wrapper.run_as_root_and_verify(["/bin/cp", "-cR", f"{src_dir}/lib", f"{DEST_DIR}/"], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
subprocess_wrapper.run_as_root_and_verify(generate_copy_arguments(f"{src_dir}/lib", f"{DEST_DIR}/"), stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
break

View File

@@ -0,0 +1,6 @@
"""
utilities: General utility functions for root volume patching
"""
from .files import install_new_file, remove_file, fix_permissions
from .dmg_mount import PatcherSupportPkgMount
from .kdk_merge import KernelDebugKitMerge

View File

@@ -0,0 +1,181 @@
"""
dmg_mount.py: PatcherSupportPkg DMG Mounting. Handles Universal-Binaries and DortaniaInternalResources DMGs.
"""
import logging
import subprocess
import applescript
from pathlib import Path
from ... import constants
from ...support import subprocess_wrapper
class PatcherSupportPkgMount:
def __init__(self, global_constants: constants.Constants) -> None:
self.constants: constants.Constants = global_constants
self.icon_path = str(self.constants.app_icon_path).replace("/", ":")[1:]
def _mount_universal_binaries_dmg(self) -> bool:
"""
Mount PatcherSupportPkg's Universal-Binaries.dmg
"""
if not Path(self.constants.payload_local_binaries_root_path_dmg).exists():
logging.info("- PatcherSupportPkg resources missing, Patcher likely corrupted!!!")
return False
output = subprocess.run(
[
"/usr/bin/hdiutil", "attach", "-noverify", f"{self.constants.payload_local_binaries_root_path_dmg}",
"-mountpoint", Path(self.constants.payload_path / Path("Universal-Binaries")),
"-nobrowse",
"-shadow", Path(self.constants.payload_path / Path("Universal-Binaries_overlay")),
"-passphrase", "password"
],
stdout=subprocess.PIPE, stderr=subprocess.STDOUT
)
if output.returncode != 0:
logging.info("- Failed to mount Universal-Binaries.dmg")
subprocess_wrapper.log(output)
return False
logging.info("- Mounted Universal-Binaries.dmg")
return True
def _mount_dortania_internal_resources_dmg(self) -> bool:
"""
Mount PatcherSupportPkg's DortaniaInternalResources.dmg (if available)
"""
if not Path(self.constants.overlay_psp_path_dmg).exists():
return True
if not Path("~/.dortania_developer").expanduser().exists():
return True
if self.constants.cli_mode is True:
return True
logging.info("- Found DortaniaInternal resources, mounting...")
for i in range(3):
key = self._request_decryption_key(i)
output = subprocess.run(
[
"/usr/bin/hdiutil", "attach", "-noverify", f"{self.constants.overlay_psp_path_dmg}",
"-mountpoint", Path(self.constants.payload_path / Path("DortaniaInternal")),
"-nobrowse",
"-passphrase", key
],
stdout=subprocess.PIPE, stderr=subprocess.STDOUT
)
if output.returncode != 0:
logging.info("- Failed to mount DortaniaInternal resources")
subprocess_wrapper.log(output)
if "Authentication error" not in output.stdout.decode():
self._display_authentication_error()
if i == 2:
self._display_too_many_attempts()
return False
logging.info("- Mounted DortaniaInternal resources")
return self._merge_dortania_internal_resources()
def _merge_dortania_internal_resources(self) -> bool:
"""
Merge DortaniaInternal resources with Universal-Binaries
"""
result = subprocess.run(
[
"/usr/bin/ditto", f"{self.constants.payload_path / Path('DortaniaInternal')}", f"{self.constants.payload_path / Path('Universal-Binaries')}"
],
stdout=subprocess.PIPE, stderr=subprocess.STDOUT
)
if result.returncode != 0:
logging.info("- Failed to merge DortaniaInternal resources")
subprocess_wrapper.log(result)
return False
return True
def _request_decryption_key(self, attempt: int) -> str:
"""
Fetch the decryption key for DortaniaInternalResources.dmg
"""
# Only return on first attempt
if attempt == 0:
if Path("~/.dortania_developer_key").expanduser().exists():
return Path("~/.dortania_developer_key").expanduser().read_text().strip()
password = ""
msg = "Welcome to the DortaniaInternal Program, please provided the decryption key to access internal resources. Press cancel to skip."
if attempt > 0:
msg = f"Decryption failed, please try again. {2 - attempt} attempts remaining. "
try:
password = applescript.AppleScript(
f"""
set theResult to display dialog "{msg}" default answer "" with hidden answer with title "OpenCore Legacy Patcher" with icon file "{self.icon_path}"
return the text returned of theResult
"""
).run()
except Exception as e:
pass
return password
def _display_authentication_error(self) -> None:
"""
Display authentication error dialog
"""
try:
applescript.AppleScript(
f"""
display dialog "Failed to mount DortaniaInternal resources, please file an internal radar." with title "OpenCore Legacy Patcher" with icon file "{self.icon_path}"
"""
).run()
except Exception as e:
pass
def _display_too_many_attempts(self) -> None:
"""
Display too many attempts dialog
"""
try:
applescript.AppleScript(
f"""
display dialog "Failed to mount DortaniaInternal resources, too many incorrect passwords. If this continues with the correct decryption key, please file an internal radar." with title "OpenCore Legacy Patcher" with icon file "{self.icon_path}"
"""
).run()
except Exception as e:
pass
def mount(self) -> bool:
"""
Mount PatcherSupportPkg resources
Returns:
bool: True if all resources are mounted, False otherwise
"""
# If already mounted, skip
if Path(self.constants.payload_local_binaries_root_path).exists():
logging.info("- Local PatcherSupportPkg resources available, continuing...")
return True
if self._mount_universal_binaries_dmg() is False:
return False
if self._mount_dortania_internal_resources_dmg() is False:
return False
return True
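The internal-resources path above gives the user three passphrase attempts, showing a radar dialog on unexpected failures and a "too many attempts" dialog on the final one. The control flow can be exercised in isolation with injectable prompt/mount callables (the names `request_key` and `try_mount` are illustrative stand-ins for the AppleScript prompt and `hdiutil attach`, not part of the patcher):

```python
from typing import Callable

def mount_with_retries(request_key: Callable[[int], str],
                       try_mount: Callable[[str], bool],
                       attempts: int = 3) -> bool:
    """Mirror the retry loop: prompt, try to mount, give up after `attempts` failures."""
    for i in range(attempts):
        if try_mount(request_key(i)):
            return True
    return False

# Simulated hdiutil: only "password" succeeds; the user guesses right on the 2nd try
guesses = iter(["hunter2", "password", "letmein"])
ok = mount_with_retries(lambda i: next(guesses), lambda key: key == "password")
print(ok)  # True
```

Passing the attempt index into the prompt is what lets the real code switch from the welcome message to the "{2 - attempt} attempts remaining" message.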

View File

@@ -0,0 +1,88 @@
"""
utilities.py: Supporting functions for file handling during root volume patching
"""
import logging
import subprocess
from pathlib import Path
from ...volume import generate_copy_arguments
from ...support import subprocess_wrapper
def install_new_file(source_folder: Path, destination_folder: Path, file_name: str) -> None:
"""
Installs a new file to the destination folder
File handling logic:
- .frameworks are merged with the destination folder
- Other files are deleted and replaced (ex. .kexts, .apps)
Parameters:
source_folder (Path): Path to the source folder
destination_folder (Path): Path to the destination folder
file_name (str): Name of the file to install
"""
file_name_str = str(file_name)
if not Path(destination_folder).exists():
logging.info(f" - Skipping {file_name}, cannot locate {source_folder}")
return
if file_name_str.endswith(".framework"):
# merge with rsync
logging.info(f" - Installing: {file_name}")
subprocess_wrapper.run_as_root(["/usr/bin/rsync", "-r", "-i", "-a", f"{source_folder}/{file_name}", f"{destination_folder}/"], stdout=subprocess.PIPE)
fix_permissions(destination_folder + "/" + file_name)
elif Path(source_folder + "/" + file_name_str).is_dir():
# Applicable for .kext, .app, .plugin, .bundle, all of which are directories
if Path(destination_folder + "/" + file_name).exists():
logging.info(f" - Found existing {file_name}, overwriting...")
subprocess_wrapper.run_as_root_and_verify(["/bin/rm", "-R", f"{destination_folder}/{file_name}"], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
else:
logging.info(f" - Installing: {file_name}")
subprocess_wrapper.run_as_root_and_verify(generate_copy_arguments(f"{source_folder}/{file_name}", destination_folder), stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
fix_permissions(destination_folder + "/" + file_name)
else:
# Assume it's an individual file, replace as normal
if Path(destination_folder + "/" + file_name).exists():
logging.info(f" - Found existing {file_name}, overwriting...")
subprocess_wrapper.run_as_root_and_verify(["/bin/rm", f"{destination_folder}/{file_name}"], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
else:
logging.info(f" - Installing: {file_name}")
subprocess_wrapper.run_as_root_and_verify(generate_copy_arguments(f"{source_folder}/{file_name}", destination_folder), stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
fix_permissions(destination_folder + "/" + file_name)
def remove_file(destination_folder: Path, file_name: str) -> None:
"""
Removes a file from the destination folder
Parameters:
destination_folder (Path): Path to the destination folder
file_name (str): Name of the file to remove
"""
if Path(destination_folder + "/" + file_name).exists():
logging.info(f" - Removing: {file_name}")
if Path(destination_folder + "/" + file_name).is_dir():
subprocess_wrapper.run_as_root_and_verify(["/bin/rm", "-R", f"{destination_folder}/{file_name}"], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
else:
subprocess_wrapper.run_as_root_and_verify(["/bin/rm", f"{destination_folder}/{file_name}"], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
def fix_permissions(destination_file: Path) -> None:
"""
Fix file permissions for a given file or directory
"""
chmod_args = ["/bin/chmod", "-Rf", "755", destination_file]
chown_args = ["/usr/sbin/chown", "-Rf", "root:wheel", destination_file]
if not Path(destination_file).is_dir():
# Strip recursive arguments
chmod_args.pop(1)
chown_args.pop(1)
subprocess_wrapper.run_as_root_and_verify(chmod_args, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
subprocess_wrapper.run_as_root_and_verify(chown_args, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
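`fix_permissions` builds recursive `chmod`/`chown` invocations and strips the `-Rf` flag when the target is a plain file rather than a bundle directory. That flag-stripping step can be exercised on its own; this sketch returns the command lines instead of running them as root (paths below are placeholders):

```python
from pathlib import Path

def permission_commands(target: Path) -> list[list[str]]:
    """Build the chmod/chown command lines, dropping -Rf for non-directories."""
    chmod_args = ["/bin/chmod", "-Rf", "755", str(target)]
    chown_args = ["/usr/sbin/chown", "-Rf", "root:wheel", str(target)]
    if not target.is_dir():
        # Strip recursive arguments, same as fix_permissions above
        chmod_args.pop(1)
        chown_args.pop(1)
    return [chmod_args, chown_args]

print(permission_commands(Path("/tmp"))[0])                      # directory: keeps -Rf
print(permission_commands(Path("/tmp/no-such.plist"))[0])        # file: -Rf stripped
```

Popping index 1 works because both argument lists place the recursive flag immediately after the binary path, so the same index is valid for both.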

View File

@@ -0,0 +1,167 @@
import logging
import subprocess
import plistlib

from pathlib import Path

from ... import constants
from ...datasets import os_data
from ...support import subprocess_wrapper, kdk_handler
from ...volume import generate_copy_arguments


class KernelDebugKitMerge:
    def __init__(self, global_constants: constants.Constants, mount_location: str, skip_root_kmutil_requirement: bool) -> None:
        self.constants: constants.Constants = global_constants
        self.mount_location = mount_location
        self.skip_root_kmutil_requirement = skip_root_kmutil_requirement


    def _matching_kdk_already_merged(self, kdk_path: str) -> bool:
        """
        Check whether the KDK is already merged with the root volume
        """
        oclp_plist = Path("/System/Library/CoreServices/OpenCore-Legacy-Patcher.plist")
        if not oclp_plist.exists():
            return False

        if not (Path(self.mount_location) / Path("System/Library/Extensions/System.kext/PlugIns/Libkern.kext/Libkern")).exists():
            return False

        try:
            oclp_plist_data = plistlib.load(open(oclp_plist, "rb"))
            if "Kernel Debug Kit Used" not in oclp_plist_data:
                return False
            if oclp_plist_data["Kernel Debug Kit Used"] == str(kdk_path):
                logging.info("- Matching KDK determined to already be merged, skipping")
                return True
        except Exception:
            pass

        return False


    def _backup_hid_cs(self) -> None:
        """
        Due to some IOHIDFamily oddities, we need to ensure their CodeSignature is retained
        """
        cs_path = Path(self.mount_location) / Path("System/Library/Extensions/IOHIDFamily.kext/Contents/PlugIns/IOHIDEventDriver.kext/Contents/_CodeSignature")
        if not cs_path.exists():
            return

        logging.info("- Backing up IOHIDEventDriver CodeSignature")
        subprocess_wrapper.run_as_root(generate_copy_arguments(cs_path, f"{self.constants.payload_path}/IOHIDEventDriver_CodeSignature.bak"), stdout=subprocess.PIPE, stderr=subprocess.STDOUT)


    def _restore_hid_cs(self) -> None:
        """
        Restore IOHIDEventDriver CodeSignature
        """
        if not Path(f"{self.constants.payload_path}/IOHIDEventDriver_CodeSignature.bak").exists():
            return

        logging.info("- Restoring IOHIDEventDriver CodeSignature")
        cs_path = Path(self.mount_location) / Path("System/Library/Extensions/IOHIDFamily.kext/Contents/PlugIns/IOHIDEventDriver.kext/Contents/_CodeSignature")
        if not cs_path.exists():
            logging.info(" - CodeSignature folder missing, creating")
            subprocess_wrapper.run_as_root(["/bin/mkdir", "-p", cs_path], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
        subprocess_wrapper.run_as_root(generate_copy_arguments(f"{self.constants.payload_path}/IOHIDEventDriver_CodeSignature.bak", cs_path), stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
        subprocess_wrapper.run_as_root(["/bin/rm", "-rf", f"{self.constants.payload_path}/IOHIDEventDriver_CodeSignature.bak"], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)


    def _merge_kdk(self, kdk_path: str) -> None:
        """
        Merge Kernel Debug Kit (KDK) with the root volume
        """
        logging.info(f"- Merging KDK with Root Volume: {Path(kdk_path).name}")
        subprocess_wrapper.run_as_root(
            # Only merge '/System/Library/Extensions'
            # 'Kernels' and 'KernelSupport' are wasted space for root patching (we don't care about dev kernels)
            ["/usr/bin/rsync", "-r", "-i", "-a", f"{kdk_path}/System/Library/Extensions/", f"{self.mount_location}/System/Library/Extensions"],
            stdout=subprocess.PIPE, stderr=subprocess.STDOUT
        )
        if not (Path(self.mount_location) / Path("System/Library/Extensions/System.kext/PlugIns/Libkern.kext/Libkern")).exists():
            logging.info("- Failed to merge KDK with Root Volume")
            raise Exception("Failed to merge KDK with Root Volume")

        logging.info("- Successfully merged KDK with Root Volume")


    def merge(self, save_hid_cs: bool = False) -> str:
        """
        Merge the Kernel Debug Kit (KDK) with the root volume

        Returns KDK used
        """
        if self.skip_root_kmutil_requirement is True:
            return None
        if self.constants.detected_os < os_data.os_data.ventura:
            return None

        # If a KDK was pre-downloaded, install it
        if self.constants.kdk_download_path.exists():
            if kdk_handler.KernelDebugKitUtilities().install_kdk_dmg(self.constants.kdk_download_path) is False:
                logging.info("Failed to install KDK")
                raise Exception("Failed to install KDK")

        # Next, grab KDK information (ie. what's the latest KDK for this OS)
        kdk_obj = kdk_handler.KernelDebugKitObject(self.constants, self.constants.detected_os_build, self.constants.detected_os_version)
        if kdk_obj.success is False:
            logging.info(f"Unable to get KDK info: {kdk_obj.error_msg}")
            raise Exception(f"Unable to get KDK info: {kdk_obj.error_msg}")

        # If no KDK is installed, download and install it
        if kdk_obj.kdk_already_installed is False:
            kdk_download_obj = kdk_obj.retrieve_download()
            if not kdk_download_obj:
                logging.info(f"Could not retrieve KDK: {kdk_obj.error_msg}")
                raise Exception(f"Could not retrieve KDK: {kdk_obj.error_msg}")

            # Hold thread until download is complete
            kdk_download_obj.download(spawn_thread=False)
            if kdk_download_obj.download_complete is False:
                error_msg = kdk_download_obj.error_msg
                logging.info(f"Could not download KDK: {error_msg}")
                raise Exception(f"Could not download KDK: {error_msg}")

            if kdk_obj.validate_kdk_checksum() is False:
                logging.info(f"KDK checksum validation failed: {kdk_obj.error_msg}")
                raise Exception(f"KDK checksum validation failed: {kdk_obj.error_msg}")

            kdk_handler.KernelDebugKitUtilities().install_kdk_dmg(self.constants.kdk_download_path)

            # Re-init kdk_obj to get the new kdk_installed_path
            kdk_obj = kdk_handler.KernelDebugKitObject(self.constants, self.constants.detected_os_build, self.constants.detected_os_version)
            if kdk_obj.success is False:
                logging.info(f"Unable to get KDK info: {kdk_obj.error_msg}")
                raise Exception(f"Unable to get KDK info: {kdk_obj.error_msg}")

            if kdk_obj.kdk_already_installed is False:
                # We shouldn't get here, but just in case
                logging.warning(f"KDK was not installed, but should have been: {kdk_obj.error_msg}")
                raise Exception(f"KDK was not installed, but should have been: {kdk_obj.error_msg}")

        kdk_path = Path(kdk_obj.kdk_installed_path) if kdk_obj.kdk_installed_path != "" else None
        if kdk_path is None:
            logging.info("- Unable to find Kernel Debug Kit")
            raise Exception("Unable to find Kernel Debug Kit")

        logging.info(f"- Found KDK at: {kdk_path}")

        if self._matching_kdk_already_merged(kdk_path):
            return kdk_path

        if save_hid_cs is True:
            self._backup_hid_cs()

        self._merge_kdk(kdk_path)

        if save_hid_cs is True:
            self._restore_hid_cs()

        return kdk_path
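`_matching_kdk_already_merged` keys off a "Kernel Debug Kit Used" entry in the on-disk `OpenCore-Legacy-Patcher.plist` receipt, which is what lets `merge()` skip the expensive rsync when nothing changed. The check itself is a small plist comparison, sketched here against a temp file instead of `/System/Library/CoreServices` (the KDK path below is illustrative):

```python
import plistlib
import tempfile
from pathlib import Path

def kdk_already_merged(oclp_plist: Path, kdk_path: str) -> bool:
    """Return True when the receipt plist records the same KDK path."""
    if not oclp_plist.exists():
        return False
    try:
        with oclp_plist.open("rb") as f:
            data = plistlib.load(f)
    except plistlib.InvalidFileException:
        return False
    return data.get("Kernel Debug Kit Used") == str(kdk_path)

with tempfile.TemporaryDirectory() as d:
    receipt = Path(d) / "OpenCore-Legacy-Patcher.plist"
    kdk = "/Library/Developer/KDKs/KDK_13.5_22G74.kdk"  # illustrative path
    receipt.write_bytes(plistlib.dumps({"Kernel Debug Kit Used": kdk}))
    print(kdk_already_merged(receipt, kdk))        # True
    print(kdk_already_merged(receipt, "/other"))   # False
```

The real method additionally verifies that `Libkern` exists under the mounted root, so a stale receipt alone never short-circuits a needed merge.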

View File

@@ -0,0 +1,46 @@
"""
volume: Volume utilities for macOS
-------------------------------------------------------------------------------
Usage - Checking if Copy on Write is supported between source and destination:
>>> from volume import can_copy_on_write
>>> source = "/path/to/source"
>>> destination = "/path/to/destination"
>>> can_copy_on_write(source, destination)
True
-------------------------------------------------------------------------------
Usage - Generating copy arguments:
>>> from volume import generate_copy_arguments
>>> source = "/path/to/source"
>>> destination = "/path/to/destination"
>>> _command = generate_copy_arguments(source, destination)
>>> _command
['/bin/cp', '-c', '/path/to/source', '/path/to/destination']
-------------------------------------------------------------------------------
Usage - Querying volume properties:
>>> from volume import PathAttributes
>>> path = "/path/to/file"
>>> obj = PathAttributes(path)
>>> obj.mount_point()
"/"
>>> obj.supports_clonefile()
True
"""
from .properties import PathAttributes
from .copy import can_copy_on_write, generate_copy_arguments

View File

@@ -0,0 +1,35 @@
"""
copy.py: Generate performant '/bin/cp' arguments for macOS
"""
from pathlib import Path
from .properties import PathAttributes
def can_copy_on_write(source: str, destination: str) -> bool:
"""
Check if Copy on Write is supported between source and destination
"""
source_obj = PathAttributes(source)
return source_obj.mount_point() == PathAttributes(str(Path(destination).parent)).mount_point() and source_obj.supports_clonefile()
def generate_copy_arguments(source: str, destination: str) -> list:
"""
Generate performant '/bin/cp' arguments for macOS
"""
_command = ["/bin/cp", source, destination]
if not Path(source).exists():
raise FileNotFoundError(f"Source file not found: {source}")
if not Path(destination).parent.exists():
raise FileNotFoundError(f"Destination directory not found: {destination}")
# Check if Copy on Write is supported.
if can_copy_on_write(source, destination):
_command.insert(1, "-c")
if Path(source).is_dir():
_command.insert(1, "-R")
return _command
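`generate_copy_arguments` only adds cp's `-c` (clonefile) flag when source and destination share a mount point and the filesystem advertises clone support. The same-volume half of that check can be approximated portably with `st_dev`; this is a stand-in for `PathAttributes.mount_point()`, which needs macOS's `getattrlist` and so cannot run elsewhere:

```python
import os
import tempfile
from pathlib import Path

def same_volume(source: str, destination: str) -> bool:
    """Compare device IDs of the source and the destination's parent directory.

    Portable approximation: two paths on the same mounted filesystem report
    the same st_dev. (clonefile support still needs the macOS-only query.)
    """
    return os.stat(source).st_dev == os.stat(Path(destination).parent).st_dev

with tempfile.TemporaryDirectory() as d:
    src = Path(d) / "a.txt"
    src.write_text("payload")
    print(same_volume(str(src), str(Path(d) / "b.txt")))  # True
```

Comparing against the destination's *parent* mirrors the code above: the destination itself usually does not exist yet when the copy is planned.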

View File

@@ -0,0 +1,110 @@
"""
properties.py: Query volume properties for a given path using macOS's getattrlist.
"""
import ctypes
class attrreference_t(ctypes.Structure):
_fields_ = [
("attr_dataoffset", ctypes.c_int32),
("attr_length", ctypes.c_uint32)
]
class attrlist_t(ctypes.Structure):
_fields_ = [
("bitmapcount", ctypes.c_ushort),
("reserved", ctypes.c_uint16),
("commonattr", ctypes.c_uint),
("volattr", ctypes.c_uint),
("dirattr", ctypes.c_uint),
("fileattr", ctypes.c_uint),
("forkattr", ctypes.c_uint)
]
class volattrbuf(ctypes.Structure):
_fields_ = [
("length", ctypes.c_uint32),
("mountPoint", attrreference_t),
("volCapabilities", ctypes.c_uint64),
("mountPointSpace", ctypes.c_char * 1024),
]
class PathAttributes:
def __init__(self, path: str) -> None:
self._path = path
if not isinstance(self._path, str):
try:
self._path = str(self._path)
except:
raise ValueError(f"Invalid path: {path}")
_libc = ctypes.CDLL("/usr/lib/libc.dylib")
# Reference:
# https://developer.apple.com/library/archive/documentation/System/Conceptual/ManPages_iPhoneOS/man2/getattrlist.2.html
try:
self._getattrlist = _libc.getattrlist
except AttributeError:
return
self._getattrlist.argtypes = [
ctypes.c_char_p, # Path
ctypes.POINTER(attrlist_t), # Attribute list
ctypes.c_void_p, # Attribute buffer
ctypes.c_ulong, # Attribute buffer size
ctypes.c_ulong # Options
]
self._getattrlist.restype = ctypes.c_int
# Reference:
# https://github.com/apple-oss-distributions/xnu/blob/xnu-10063.121.3/bsd/sys/attr.h
ATTR_BIT_MAP_COUNT = 0x00000005
ATTR_VOL_MOUNTPOINT = 0x00001000
ATTR_VOL_CAPABILITIES = 0x00020000
attrList = attrlist_t()
attrList.bitmapcount = ATTR_BIT_MAP_COUNT
attrList.volattr = ATTR_VOL_MOUNTPOINT | ATTR_VOL_CAPABILITIES
volAttrBuf = volattrbuf()
if self._getattrlist(self._path.encode(), ctypes.byref(attrList), ctypes.byref(volAttrBuf), ctypes.sizeof(volAttrBuf), 0) != 0:
return
self._volAttrBuf = volAttrBuf
def supports_clonefile(self) -> bool:
"""
Verify if path provided supports Apple's clonefile function.
Equivalent to checking for Copy on Write support.
"""
VOL_CAP_INT_CLONE = 0x00010000
if not hasattr(self, "_volAttrBuf"):
return False
if self._volAttrBuf.volCapabilities & VOL_CAP_INT_CLONE:
return True
return False
def mount_point(self) -> str:
"""
Return mount point of path.
"""
if not hasattr(self, "_volAttrBuf"):
return ""
mount_point_ptr = ctypes.cast(
ctypes.addressof(self._volAttrBuf.mountPoint) + self._volAttrBuf.mountPoint.attr_dataoffset,
ctypes.POINTER(ctypes.c_char * self._volAttrBuf.mountPoint.attr_length)
)
return mount_point_ptr.contents.value.decode()
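`mount_point()` above decodes an `attrreference_t`: the string itself lives elsewhere in the attribute buffer, at `attr_dataoffset` bytes from the start of the reference field. That pointer arithmetic can be verified against a hand-built buffer, so no `getattrlist` call (and no macOS) is needed; `demobuf` is a made-up structure for this demo, not Apple's layout:

```python
import ctypes

class attrreference_t(ctypes.Structure):
    _fields_ = [("attr_dataoffset", ctypes.c_int32),
                ("attr_length", ctypes.c_uint32)]

class demobuf(ctypes.Structure):
    _fields_ = [("mountPoint", attrreference_t),
                ("payload", ctypes.c_char * 32)]

buf = demobuf()
# The string sits in `payload`, which starts sizeof(attrreference_t) bytes
# past the start of `mountPoint` -- so that's the offset we record.
buf.payload = b"/Volumes/Demo\x00"
buf.mountPoint.attr_dataoffset = ctypes.sizeof(attrreference_t)
buf.mountPoint.attr_length = len(b"/Volumes/Demo\x00")

ptr = ctypes.cast(
    ctypes.addressof(buf.mountPoint) + buf.mountPoint.attr_dataoffset,
    ctypes.POINTER(ctypes.c_char * buf.mountPoint.attr_length)
)
print(ptr.contents.value.decode())  # /Volumes/Demo
```

The cast in `mount_point()` is the same operation: take the address of the reference field, add the recorded offset, and reinterpret `attr_length` bytes there as a NUL-terminated C string.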

View File

@@ -78,7 +78,7 @@ class BuildFrame(wx.Frame):
          self.install_button = install_button

          # Read-only text box: {empty}
-         text_box = wx.TextCtrl(frame, value="", pos=(-1, install_button.GetPosition()[1] + install_button.GetSize()[1] + 10), size=(400, 350), style=wx.TE_READONLY | wx.TE_MULTILINE | wx.TE_RICH2)
+         text_box = wx.TextCtrl(frame, value="", pos=(-1, install_button.GetPosition()[1] + install_button.GetSize()[1] + 10), size=(380, 350), style=wx.TE_READONLY | wx.TE_MULTILINE | wx.TE_RICH2)
          text_box.Centre(wx.HORIZONTAL)
          self.text_box = text_box

View File

@@ -12,7 +12,7 @@ from Cocoa import NSApp, NSApplication
  from .. import constants
- from ..sys_patch import sys_patch_detect
+ from ..sys_patch.detections import DetectRootPatch
  from ..wx_gui import (
      gui_cache_os_update,
@@ -64,7 +64,7 @@ class EntryPoint:
          if "--gui_patch" in sys.argv or "--gui_unpatch" in sys.argv or start_patching is True :
              entry = gui_sys_patch_start.SysPatchStartFrame
-             patches = sys_patch_detect.DetectRootPatch(self.constants.computer.real_model, self.constants).detect_patch_set()
+             patches = DetectRootPatch(self.constants.computer.real_model, self.constants).detect_patch_set()
          logging.info(f"Entry point set: {entry.__name__}")

View File

@@ -252,7 +252,7 @@ class InstallOCFrame(wx.Frame):
          text_label.Centre(wx.HORIZONTAL)

          # Read-only text box: {empty}
-         text_box = wx.TextCtrl(dialog, value="", pos=(-1, text_label.GetPosition()[1] + text_label.GetSize()[1] + 10), size=(370, 200), style=wx.TE_READONLY | wx.TE_MULTILINE | wx.TE_RICH2)
+         text_box = wx.TextCtrl(dialog, value="", pos=(-1, text_label.GetPosition()[1] + text_label.GetSize()[1] + 10), size=(350, 200), style=wx.TE_READONLY | wx.TE_MULTILINE | wx.TE_RICH2)
          text_box.Centre(wx.HORIZONTAL)
          self.text_box = text_box

View File

@@ -10,7 +10,10 @@ import webbrowser
  from pathlib import Path

- from .. import constants
+ from .. import (
+     constants,
+     sucatalog
+ )
  from ..datasets import (
      os_data,
@@ -46,7 +49,7 @@ class macOSInstallerDownloadFrame(wx.Frame):
          self.available_installers = None
          self.available_installers_latest = None
-         self.catalog_seed: macos_installer_handler.SeedType = macos_installer_handler.SeedType.DeveloperSeed
+         self.catalog_seed: sucatalog.SeedType = sucatalog.SeedType.DeveloperSeed

          self.frame_modal = wx.Dialog(parent, title=title, size=(330, 200))
@@ -132,10 +135,16 @@ class macOSInstallerDownloadFrame(wx.Frame):
          # Grab installer catalog
          def _fetch_installers():
-             logging.info(f"Fetching installer catalog: {macos_installer_handler.SeedType(self.catalog_seed).name}")
-             remote_obj = macos_installer_handler.RemoteInstallerCatalog(seed_override=self.catalog_seed)
-             self.available_installers = remote_obj.available_apps
-             self.available_installers_latest = remote_obj.available_apps_latest
+             logging.info(f"Fetching installer catalog: {sucatalog.SeedType.DeveloperSeed.name}")
+             sucatalog_contents = sucatalog.CatalogURL(seed=sucatalog.SeedType.DeveloperSeed).url_contents
+             if sucatalog_contents is None:
+                 logging.error("Failed to download Installer Catalog from Apple")
+                 return
+             self.available_installers = sucatalog.CatalogProducts(sucatalog_contents).products
+             self.available_installers_latest = sucatalog.CatalogProducts(sucatalog_contents).latest_products

          thread = threading.Thread(target=_fetch_installers)
          thread.start()
@@ -157,7 +166,7 @@ class macOSInstallerDownloadFrame(wx.Frame):
          bundles = [wx.BitmapBundle.FromBitmaps(icon) for icon in self.icons]

          self.frame_modal.Destroy()
-         self.frame_modal = wx.Dialog(self, title="Select macOS Installer", size=(460, 500))
+         self.frame_modal = wx.Dialog(self, title="Select macOS Installer", size=(505, 500))

          # Title: Select macOS Installer
          title_label = wx.StaticText(self.frame_modal, label="Select macOS Installer", pos=(-1,-1))
@@ -169,35 +178,31 @@ class macOSInstallerDownloadFrame(wx.Frame):
          self.list = wx.ListCtrl(self.frame_modal, id, style=wx.LC_REPORT | wx.LC_SINGLE_SEL | wx.LC_NO_HEADER | wx.BORDER_SUNKEN)
          self.list.SetSmallImages(bundles)
-         self.list.InsertColumn(0, "Version")
-         self.list.InsertColumn(1, "Size")
-         self.list.InsertColumn(2, "Release Date")
+         self.list.InsertColumn(0, "Title", width=175)
+         self.list.InsertColumn(1, "Version", width=50)
+         self.list.InsertColumn(2, "Build", width=75)
+         self.list.InsertColumn(3, "Size", width=75)
+         self.list.InsertColumn(4, "Release Date", width=100)

          installers = self.available_installers_latest if show_full is False else self.available_installers
          if show_full is False:
-             self.frame_modal.SetSize((460, 370))
+             self.frame_modal.SetSize((490, 370))

          if installers:
              locale.setlocale(locale.LC_TIME, '')
              logging.info(f"Available installers on SUCatalog ({'All entries' if show_full else 'Latest only'}):")
              for item in installers:
-                 extra = " Beta" if installers[item]['Variant'] in ["DeveloperSeed" , "PublicSeed"] else ""
-                 logging.info(f"- macOS {installers[item]['Version']} ({installers[item]['Build']}):\n - Size: {utilities.human_fmt(installers[item]['Size'])}\n - Source: {installers[item]['Source']}\n - Variant: {installers[item]['Variant']}\n - Link: {installers[item]['Link']}\n")
-                 index = self.list.InsertItem(self.list.GetItemCount(), f"macOS {installers[item]['Version']} {os_data.os_conversion.convert_kernel_to_marketing_name(int(installers[item]['Build'][:2]))}{extra} ({installers[item]['Build']})")
-                 self.list.SetItemImage(index, self._macos_version_to_icon(int(installers[item]['Build'][:2])))
-                 self.list.SetItem(index, 1, utilities.human_fmt(installers[item]['Size']))
-                 self.list.SetItem(index, 2, installers[item]['Date'].strftime("%x"))
+                 logging.info(f"- {item['Title']} ({item['Version']} - {item['Build']}):\n - Size: {utilities.human_fmt(item['InstallAssistant']['Size'])}\n - Link: {item['InstallAssistant']['URL']}\n")
+                 index = self.list.InsertItem(self.list.GetItemCount(), f"{item['Title']}")
+                 self.list.SetItemImage(index, self._macos_version_to_icon(int(item['Build'][:2])))
+                 self.list.SetItem(index, 1, item['Version'])
+                 self.list.SetItem(index, 2, item['Build'])
+                 self.list.SetItem(index, 3, utilities.human_fmt(item['InstallAssistant']['Size']))
+                 self.list.SetItem(index, 4, item['PostDate'].strftime("%x"))
          else:
              logging.error("No installers found on SUCatalog")
              wx.MessageDialog(self.frame_modal, "Failed to download Installer Catalog from Apple", "Error", wx.OK | wx.ICON_ERROR).ShowModal()
-         self.list.SetColumnWidth(0, 280)
-         self.list.SetColumnWidth(1, 65)
-         if show_full is True:
-             self.list.SetColumnWidth(2, 80)
-         else:
-             self.list.SetColumnWidth(2, 94) # Hack to get the highlight to fill the ListCtrl

          if show_full is False:
              self.list.Select(-1)
@@ -256,7 +261,7 @@ class macOSInstallerDownloadFrame(wx.Frame):
          if not clipboard.IsOpened():
              clipboard.Open()
-         clipboard.SetData(wx.TextDataObject(list(installers.values())[selected_item]['Link']))
+         clipboard.SetData(wx.TextDataObject(installers[selected_item]['InstallAssistant']['URL']))
          clipboard.Close()
@@ -278,14 +283,15 @@ class macOSInstallerDownloadFrame(wx.Frame):
          selected_item = self.list.GetFirstSelected()
          if selected_item != -1:
+             selected_installer = installers[selected_item]
-             logging.info(f"Selected macOS {list(installers.values())[selected_item]['Version']} ({list(installers.values())[selected_item]['Build']})")
+             logging.info(f"Selected macOS {selected_installer['Version']} ({selected_installer['Build']})")

              # Notify user whether their model is compatible with the selected installer
              problems = []
              model = self.constants.custom_model or self.constants.computer.real_model
              if model in smbios_data.smbios_dictionary:
-                 if list(installers.values())[selected_item]["OS"] >= os_data.os_data.ventura:
+                 if selected_installer["InstallAssistant"]["XNUMajor"] >= os_data.os_data.ventura:
                      if smbios_data.smbios_dictionary[model]["CPU Generation"] <= cpu_data.CPUGen.penryn or model in ["MacPro4,1", "MacPro5,1", "Xserve3,1"]:
                          if model.startswith("MacBook"):
                              problems.append("Lack of internal Keyboard/Trackpad in macOS installer.")
@@ -293,7 +299,7 @@ class macOSInstallerDownloadFrame(wx.Frame):
                              problems.append("Lack of internal Keyboard/Mouse in macOS installer.")
                  if problems:
-                     logging.warning(f"Potential issues with {model} and {list(installers.values())[selected_item]['Version']} ({list(installers.values())[selected_item]['Build']}): {problems}")
+                     logging.warning(f"Potential issues with {model} and {selected_installer['Version']} ({selected_installer['Build']}): {problems}")
                      problems = "\n".join(problems)
                      dlg = wx.MessageDialog(self.frame_modal, f"Your model ({model}) may not be fully supported by this installer. You may encounter the following issues:\n\n{problems}\n\nFor more information, see associated page. Otherwise, we recommend using macOS Monterey", "Potential Issues", wx.YES_NO | wx.CANCEL | wx.ICON_WARNING)
                      dlg.SetYesNoCancelLabels("View Github Issue", "Download Anyways", "Cancel")
@@ -305,7 +311,7 @@ class macOSInstallerDownloadFrame(wx.Frame):
                          return

              host_space = utilities.get_free_space()
-             needed_space = list(installers.values())[selected_item]['Size'] * 2
+             needed_space = selected_installer['InstallAssistant']['Size'] * 2
              if host_space < needed_space:
                  logging.error(f"Insufficient space to download and extract: {utilities.human_fmt(host_space)} available vs {utilities.human_fmt(needed_space)} required")
                  dlg = wx.MessageDialog(self.frame_modal, f"You do not have enough free space to download and extract this installer. Please free up some space and try again\n\n{utilities.human_fmt(host_space)} available vs {utilities.human_fmt(needed_space)} required", "Insufficient Space", wx.OK | wx.ICON_WARNING)
@@ -314,22 +320,22 @@ class macOSInstallerDownloadFrame(wx.Frame):
              self.frame_modal.Close()

-             download_obj = network_handler.DownloadObject(list(installers.values())[selected_item]['Link'], self.constants.payload_path / "InstallAssistant.pkg")
+             download_obj = network_handler.DownloadObject(selected_installer['InstallAssistant']['URL'], self.constants.payload_path / "InstallAssistant.pkg")
              gui_download.DownloadFrame(
                  self,
                  title=self.title,
                  global_constants=self.constants,
                  download_obj=download_obj,
-                 item_name=f"macOS {list(installers.values())[selected_item]['Version']} ({list(installers.values())[selected_item]['Build']})",
-                 download_icon=self.constants.icons_path[self._macos_version_to_icon(int(list(installers.values())[selected_item]['Build'][:2]))]
+                 item_name=f"macOS {selected_installer['Version']} ({selected_installer['Build']})",
+                 download_icon=self.constants.icons_path[self._macos_version_to_icon(selected_installer["InstallAssistant"]["XNUMajor"])]
              )
              if download_obj.download_complete is False:
                  self.on_return_to_main_menu()
                  return

-             self._validate_installer(list(installers.values())[selected_item]['integrity'])
+             self._validate_installer(selected_installer['InstallAssistant']['IntegrityDataURL'])
def _validate_installer(self, chunklist_link: str) -> None: def _validate_installer(self, chunklist_link: str) -> None:
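The refactor above replaces every repeated `list(installers.values())[selected_item][...]` lookup with a single hoisted `selected_installer` variable. A minimal sketch of the pattern, where the catalog key and field values are hypothetical and the dictionary shape is assumed from the keys visible in the diff:

```python
# Hypothetical catalog shape, inferred from the keys used in the diff above
installers = {
    "example-product-id": {
        "Version": "14.6.1",
        "Build": "23G93",
        "InstallAssistant": {
            "Size": 13_000_000_000,  # bytes
            "URL": "https://example.com/InstallAssistant.pkg",
            "XNUMajor": 23,
        },
    },
}
selected_item = 0

# Hoist the indexed lookup once, instead of re-evaluating it for every field
selected_installer = list(installers.values())[selected_item]
needed_space = selected_installer["InstallAssistant"]["Size"] * 2  # download + extraction
```

Besides readability, the hoist guarantees every field is read from the same entry even if the selection changes mid-flow.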


@@ -15,6 +15,7 @@ from pathlib import Path
 from .. import constants
 from ..datasets import os_data
+from ..volume import generate_copy_arguments
 from ..wx_gui import (
     gui_main_menu,
@@ -460,7 +461,7 @@ class macOSInstallerFlashFrame(wx.Frame):
             return
         subprocess.run(["/bin/mkdir", "-p", f"{path}/Library/Packages/"])
-        subprocess.run(["/bin/cp", "-r", self.constants.installer_pkg_path, f"{path}/Library/Packages/"])
+        subprocess.run(generate_copy_arguments(self.constants.installer_pkg_path, f"{path}/Library/Packages/"))
         self._kdk_chainload(os_version["ProductBuildVersion"], os_version["ProductVersion"], Path(path + "/Library/Packages/"))
@@ -530,7 +531,7 @@ class macOSInstallerFlashFrame(wx.Frame):
             return
         logging.info("Copying KDK")
-        subprocess.run(["/bin/cp", "-r", f"{mount_point}/KernelDebugKit.pkg", kdk_pkg_path])
+        subprocess.run(generate_copy_arguments(f"{mount_point}/KernelDebugKit.pkg", kdk_pkg_path))
         logging.info("Unmounting KDK")
         result = subprocess.run(["/usr/bin/hdiutil", "detach", mount_point], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
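Both hard-coded `/bin/cp -r` calls above are swapped for a shared `generate_copy_arguments` helper from `..volume`. The diff does not show that helper's body, so the following is only a plausible sketch under two assumptions: it returns an argv list suitable for `subprocess.run`, and it prefers APFS copy-on-write cloning when source and destination share a volume:

```python
import os

def generate_copy_arguments(source, destination) -> list:
    """Build an argv list for copying 'source' into 'destination'.

    Sketch only: the actual helper in ..volume may differ. Here we use
    'cp -c' (APFS clonefile) when both paths live on the same device,
    falling back to a plain recursive copy otherwise.
    """
    if not os.path.exists(source):
        raise FileNotFoundError(f"Source does not exist: {source}")
    # Compare device IDs of the source and the destination's parent directory
    dest_parent = os.path.dirname(str(destination)) or "."
    same_volume = os.stat(source).st_dev == os.stat(dest_parent).st_dev
    if same_volume:
        return ["/bin/cp", "-cR", str(source), str(destination)]
    return ["/bin/cp", "-R", str(source), str(destination)]
```

The call sites in the diff then become `subprocess.run(generate_copy_arguments(src, dst))`, centralizing the copy strategy in one place.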


@@ -11,9 +11,6 @@ import requests
 import markdown2
 import threading
 import webbrowser
-import subprocess
-from pathlib import Path
 from .. import constants


@@ -2,7 +2,6 @@
 gui_settings.py: Settings Frame for the GUI
 """
-import os
 import wx
 import wx.adv
 import pprint
@@ -24,8 +23,7 @@ from ..support import (
     global_settings,
     defaults,
     generate_smbios,
-    network_handler,
-    subprocess_wrapper
+    network_handler
 )
 from ..datasets import (
     model_array,


@@ -2,7 +2,6 @@
 gui_support.py: Utilities for interacting with wxPython GUI
 """
-import os
 import wx
 import sys
 import time
@@ -20,7 +19,6 @@ from . import gui_about
 from .. import constants
 from ..detections import device_probe
-from ..support import subprocess_wrapper
 from ..datasets import (
     model_array,


@@ -2,7 +2,6 @@
 gui_sys_patch_display.py: Display root patching menu
 """
-import os
 import wx
 import logging
 import plistlib
@@ -12,7 +11,7 @@ from pathlib import Path
 from .. import constants
-from ..sys_patch import sys_patch_detect
+from ..sys_patch.detections import DetectRootPatch
 from ..wx_gui import (
     gui_main_menu,
@@ -87,7 +86,7 @@ class SysPatchDisplayFrame(wx.Frame):
         patches: dict = {}
         def _fetch_patches(self) -> None:
             nonlocal patches
-            patches = sys_patch_detect.DetectRootPatch(self.constants.computer.real_model, self.constants).detect_patch_set()
+            patches = DetectRootPatch(self.constants.computer.real_model, self.constants).detect_patch_set()
         thread = threading.Thread(target=_fetch_patches, args=(self,))
         thread.start()
@@ -111,7 +110,7 @@ class SysPatchDisplayFrame(wx.Frame):
         if not any(not patch.startswith("Settings") and not patch.startswith("Validation") and patches[patch] is True for patch in patches):
             logging.info("No applicable patches available")
-            patches = []
+            patches = {}
         # Check if OCLP has already applied the same patches
         no_new_patches = not self._check_if_new_patches_needed(patches) if patches else False
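The `patches = []` → `patches = {}` fix in the last hunk keeps the empty sentinel consistent with the `patches: dict = {}` declaration above; a list would break any downstream dict-style access while still passing the truthiness check. A toy illustration, where the helper name mirrors `_check_if_new_patches_needed` but its body is an assumption for demonstration:

```python
def check_if_new_patches_needed(patches: dict) -> bool:
    # Dict iteration with .items() would raise AttributeError
    # if 'patches' had been reset to a list instead of a dict
    return any(v is True for k, v in patches.items()
               if not k.startswith(("Settings", "Validation")))

patches = {}  # correct empty sentinel: still a dict, and falsy
no_new_patches = not check_if_new_patches_needed(patches) if patches else False
```

Keeping the sentinel the same type as the annotated default means callers never need to branch on whether they received a dict or a list.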


@@ -20,7 +20,6 @@ from ..support import kdk_handler
 from ..sys_patch import (
     sys_patch,
-    sys_patch_detect
 )
 from ..wx_gui import (
     gui_main_menu,
@@ -28,6 +27,8 @@ from ..wx_gui import (
     gui_download,
 )
+from ..sys_patch.detections import DetectRootPatch

 class SysPatchStartFrame(wx.Frame):
@@ -50,7 +51,7 @@ class SysPatchStartFrame(wx.Frame):
         self.Centre()
         if self.patches == {}:
-            self.patches = sys_patch_detect.DetectRootPatch(self.constants.computer.real_model, self.constants).detect_patch_set()
+            self.patches = DetectRootPatch(self.constants.computer.real_model, self.constants).detect_patch_set()

     def _kdk_download(self, frame: wx.Frame = None) -> bool:
@@ -198,7 +199,7 @@ class SysPatchStartFrame(wx.Frame):
         # Text box
-        text_box = wx.TextCtrl(dialog, pos=(10, patch_label.GetPosition()[1] + 30), size=(400, 400), style=wx.TE_READONLY | wx.TE_MULTILINE | wx.TE_RICH2)
+        text_box = wx.TextCtrl(dialog, pos=(10, patch_label.GetPosition()[1] + 30), size=(380, 400), style=wx.TE_READONLY | wx.TE_MULTILINE | wx.TE_RICH2)
         text_box.SetFont(gui_support.font_factory(13, wx.FONTWEIGHT_NORMAL))
         text_box.Centre(wx.HORIZONTAL)
         self.text_box = text_box


@@ -6,7 +6,6 @@ import wx
 import sys
 import time
 import logging
-import datetime
 import threading
 import subprocess


@@ -3,7 +3,9 @@
 <plist version="1.0">
 <dict>
     <key>AssociatedBundleIdentifiers</key>
-    <string>com.dortania.opencore-legacy-patcher</string>
+    <array>
+        <string>com.dortania.opencore-legacy-patcher</string>
+    </array>
     <key>Label</key>
     <string>com.dortania.opencore-legacy-patcher.auto-patch</string>
     <key>ProgramArguments</key>


@@ -3,7 +3,9 @@
 <plist version="1.0">
 <dict>
     <key>AssociatedBundleIdentifiers</key>
-    <string>com.dortania.opencore-legacy-patcher</string>
+    <array>
+        <string>com.dortania.opencore-legacy-patcher</string>
+    </array>
     <key>Label</key>
     <string>com.dortania.opencore-legacy-patcher.macos-update</string>
     <key>ProgramArguments</key>


@@ -3,7 +3,9 @@
 <plist version="1.0">
 <dict>
     <key>AssociatedBundleIdentifiers</key>
-    <string>com.dortania.opencore-legacy-patcher</string>
+    <array>
+        <string>com.dortania.opencore-legacy-patcher</string>
+    </array>
     <key>Label</key>
     <string>com.dortania.opencore-legacy-patcher.rsr-monitor</string>
     <key>ProgramArguments</key>


@@ -3,7 +3,9 @@
 <plist version="1.0">
 <dict>
     <key>AssociatedBundleIdentifiers</key>
-    <string>com.dortania.opencore-legacy-patcher</string>
+    <array>
+        <string>com.dortania.opencore-legacy-patcher</string>
+    </array>
     <key>Label</key>
     <string>com.dortania.opencore-legacy-patcher.rsr-monitor</string>
     <key>ProgramArguments</key>
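The launchd plist diffs above all make the same correction: per the launchd.plist documentation, `AssociatedBundleIdentifiers` takes an array of strings, even when only one bundle identifier is listed, so the bare `<string>` is wrapped in an `<array>`. A minimal job definition using the corrected form (the `ProgramArguments` path is a placeholder, not the project's real helper path):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Must be an <array>, not a bare <string>, for launchd to attribute the job -->
    <key>AssociatedBundleIdentifiers</key>
    <array>
        <string>com.dortania.opencore-legacy-patcher</string>
    </array>
    <key>Label</key>
    <string>com.dortania.opencore-legacy-patcher.auto-patch</string>
    <key>ProgramArguments</key>
    <array>
        <string>/path/to/helper</string>
    </array>
</dict>
</plist>
```

On macOS 13 and later, this key is what lets System Settings' Login Items pane attribute the background item to the app rather than showing an anonymous developer entry.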