| d2d20701 | 30-Nov-2021 |
Matthew Dillon <dillon@apollo.backplane.com> |
drm - Hack i915 to workaround startup crash
* Work around a startup crash (at least on my i5-6500) caused by a ring timeout and request-not-completed. Replace the BUG_ON with a warning and document the hack.
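A minimal sketch of the shape of the workaround, assuming the assertion concerned is a BUG_ON() on request completion in the ring-timeout path; the surrounding names are illustrative, not the verbatim change:

    /*
     * Illustrative only: downgrade the fatal assertion to a warning so a
     * ring timeout with a not-yet-completed request no longer panics
     * during startup.
     *
     * Was: BUG_ON(!i915_request_completed(rq));
     */
    if (!i915_request_completed(rq)) {
            DRM_ERROR("i915: ring timeout, request not completed; "
                "continuing anyway (see hack note)\n");
    }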
|
| 40e07378 | 08-Nov-2021 |
Sascha Wildner <saw@online.de> |
kernel/amdgpu: Remove -DLITTLEENDIAN_CPU from the Makefile.
Maybe this was needed during porting at some point, but it all resolves fine now via amdgpu's os_types.h, which defines it based on the __{BIG,LITTLE}_ENDIAN defines that we have in base.
In-discussion-with: Sergey Zigachev <s.zi@outlook.com>
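For reference, the resolution described works roughly like the sketch below, assuming os_types.h keys the choice off the base system's endian defines (not the verbatim header):

    /* Sketch: pick the amdgpu endianness macro from the base defines. */
    #if defined(__BIG_ENDIAN)
    #define BIGENDIAN_CPU
    #elif defined(__LITTLE_ENDIAN)
    #define LITTLEENDIAN_CPU
    #endif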
|
| ef7b4fc3 | 08-Nov-2021 |
Matthew Dillon <dillon@apollo.backplane.com> |
drm - Increase hacked stolen framebuffer memory for vega9 (2400G, 3550H, ...)
* Increase from 9MB to 64MB, fixing an amdgpu crash on kldload when a 4K monitor is attached.
The assignment being fixed is already a hack as of linux 4.19. Hack it some more.
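Roughly the shape of the change; the macro name below is hypothetical, since the commit only states the 9MB to 64MB bump:

    /* Hypothetical name, illustrating the bump so a 4K framebuffer fits
     * inside the hacked stolen-memory reservation for vega9. */
    #define VEGA9_STOLEN_FB_SIZE    (64ULL * 1024 * 1024)   /* was 9MB */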
|
| 3dbf45c4 | 04-Nov-2021 |
Sascha Wildner <saw@online.de> |
kernel: Fix LINT64 build.
* Add a needed file to sys/conf/files.
* Shield the drm code from 'DEBUG' being defined via opt_global.h (DEBUG is a kernel configuration option). This was breaking amdgpu's atom.c that defines its own 'DEBUG' macro.
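A minimal sketch of the kind of shielding described, assuming a simple preprocessor guard in front of the affected drm sources (illustrative, not the exact change):

    /*
     * opt_global.h may define DEBUG as a kernel configuration option;
     * drop it here so amdgpu's atom.c can define its own DEBUG macro
     * without a redefinition clash.
     */
    #ifdef DEBUG
    #undef DEBUG
    #endif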
|
| e9dbfea1 | 21-Mar-2021 |
Matthew Dillon <dillon@apollo.backplane.com> |
kernel - Add kmalloc_obj subsystem step 1
* Implement per-zone memory management for kmalloc() in the form of kmalloc_obj() and friends. Currently the subsystem uses the same malloc_type structure but is otherwise distinct from the normal kmalloc(), so to avoid programming mistakes the *_obj() subsystem post-pends '_obj' to the malloc_type pointers passed into it.
This mechanism will eventually replace objcache. It is designed to greatly reduce fragmentation issues on systems with long uptimes.
Eventually the feature will be better integrated and I will be able to remove the _obj stuff.
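A usage sketch of the convention described above; the object type, zone name, and exact declaration macro are assumptions for illustration and may differ from the real *_obj() API:

    #include <sys/param.h>
    #include <sys/malloc.h>

    /* Hypothetical fixed-size object type for illustration. */
    struct foonode {
            int     fn_id;
            char    fn_name[32];
    };

    /* Zone assumed to be defined elsewhere with the *_obj() subsystem's
     * zone-definition macro (spelling omitted here as an assumption). */
    MALLOC_DECLARE(M_FOONODE);

    struct foonode *
    foonode_alloc(void)
    {
            /* The *_obj() calls post-pend '_obj' to the malloc_type they
             * are handed, so the zone cannot be mixed up with the normal
             * kmalloc() path by accident. */
            return (kmalloc_obj(sizeof(struct foonode), M_FOONODE,
                                M_WAITOK | M_ZERO));
    }

    void
    foonode_free(struct foonode *fn)
    {
            kfree_obj(fn, M_FOONODE);
    }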
* This is an object allocator, so the zone must be dedicated to one type of object with a fixed size. All allocations out of the zone return that object.
The allocator is not quite type-stable yet, but will be once existential locks are integrated into the freeing mechanism.
* Implement a mini-slab allocator for management. Since the zones are single-object, similar to objcache, the fixed-size mini-slabs are a lot easier to optimize and much simpler in construction than the main kernel slab allocator.
Uses a per-zone/per-cpu active/alternate slab with an ultra-optimized allocation path, and a per-zone partial/full/empty list.
Also has a globaldata-based per-cpu cache of free slabs. The mini-slab allocator frees slabs back to the same cpu they were originally allocated from in order to retain memory locality over time.
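A rough sketch of the bookkeeping described above, with hypothetical structure and field names (the real structures are more involved):

    /* Hypothetical layout of the per-zone mini-slab bookkeeping. */
    struct kmalloc_slab;                    /* fixed-size slab of one object type */

    struct kmalloc_zone_pcpu {
            struct kmalloc_slab *active;    /* ultra-optimized allocation path */
            struct kmalloc_slab *alternate; /* swapped in when active runs dry */
    };

    struct kmalloc_zone {
            struct kmalloc_zone_pcpu *pcpu; /* one entry per cpu (ncpus) */
            struct kmalloc_slab *partial;   /* per-zone slab lists */
            struct kmalloc_slab *full;
            struct kmalloc_slab *empty;
    };

    /*
     * Freed slabs are handed back to the cpu that originally allocated
     * them (via a globaldata-based per-cpu cache of free slabs),
     * preserving memory locality over time.
     */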
* Implement a passive cleanup poller. This currently polls kmalloc zones very slowly looking for excess full slabs to release back to the global slab cache or the system (if the global slab cache is full).
This code will ultimately also handle existential type-stable freeing.
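The poller's job, as described, amounts to something like the sketch below; it reuses the hypothetical structures from the previous sketch, and every helper named here is an assumption, not a real kernel function:

    /* Illustrative slow poller: walk a zone and dispose of excess full slabs. */
    static void
    kmalloc_obj_poll_zone(struct kmalloc_zone *zone)
    {
            struct kmalloc_slab *slab;

            /* zone_excess_full_slab(), global_slab_cache_full(),
             * global_slab_cache_insert() and slab_release_to_system()
             * are hypothetical helpers used only for illustration. */
            while ((slab = zone_excess_full_slab(zone)) != NULL) {
                    if (!global_slab_cache_full())
                            global_slab_cache_insert(slab); /* keep for reuse */
                    else
                            slab_release_to_system(slab);   /* return the memory */
            }
    }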
* Fragmentation is greatly reduced due to the distinct zones. Slabs are dedicated to the zone and do not share allocation space with other zones. Also, when a zone is destroyed, all of its memory is cleanly disposed of and there will be no left-over fragmentation.
* Initially use the new interface for the following. These zones tend to or can become quite big:
vnodes
namecache (but not related strings)
hammer2 chains
hammer2 inodes
tmpfs nodes
tmpfs dirents (but not related strings)
|