History log of /dflybsd-src/sys/vfs/procfs/procfs_map.c (Results 1 – 25 of 33)
Revision Date Author Comments
Revision tags: v6.4.0, v6.4.0rc1, v6.5.0, v6.2.2, v6.2.1, v6.3.0, v6.0.1, v6.0.0, v6.0.0rc1, v6.1.0
# 4d4f84f5 07-Jan-2021 Matthew Dillon <dillon@apollo.backplane.com>

kernel - Remove MAP_VPAGETABLE

* This will break vkernel support for now, but after a lot of mulling
there's just no other way forward. MAP_VPAGETABLE was basically a
software page-table feature for mmap()s that allowed the vkernel
to implement page tables without needing hardware virtualization support.

* The basic problem is that the VM system is moving to an extent-based
mechanism for tracking VM pages entered into PMAPs and is no longer
indexing individual terminal PTEs with pv_entry's.

This means that the VM system is no longer able to get an exact list of
PTEs in PMAPs that a particular vm_page is using. It just has a
flag 'this page is in at least one pmap' or 'this page is not in any
pmaps'. To track down the PTEs, the VM system must run through the
extents via the vm_map_backing structures hanging off the related
VM object.

This mechanism does not work with MAP_VPAGETABLE. Short of scanning
the entire real pmap, the kernel has no way to reverse-index a page
that might be indirected through MAP_VPAGETABLE.

* We will need actual hardware mmu virtualization to get the vkernel
working again.
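
To make the extent mechanism concrete: a minimal sketch, assuming
simplified names (backing_list, the list linkage and pmap_remove_page()
are illustrative, not the real kernel API), of how the VM system can
locate a page's PTEs by walking the vm_map_backing structures hanging
off its object:

    /*
     * Sketch: remove all PTEs mapping vm_page 'm' by scanning the
     * vm_map_backing extents attached to its object.  A page mapped
     * through MAP_VPAGETABLE could sit at any virtual address the
     * guest page table chose, so no such bounded scan could find it.
     */
    static void
    page_unmap_all(vm_page_t m)
    {
        vm_object_t obj = m->object;
        struct vm_map_backing *ba;

        TAILQ_FOREACH(ba, &obj->backing_list, object_entry) {
            vm_pindex_t first = OFF_TO_IDX(ba->offset);
            vm_pindex_t npages = atop(ba->end - ba->start);

            if (m->pindex < first || m->pindex >= first + npages)
                continue;    /* extent does not cover this page */
            pmap_remove_page(ba->pmap,    /* illustrative call */
                ba->start + ptoa(m->pindex - first));
        }
    }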



Revision tags: v5.8.3, v5.8.2, v5.8.1, v5.8.0, v5.9.0, v5.8.0rc1, v5.6.3
# e2164e29 18-Oct-2019 zrj <rimvydas.jasinskas@gmail.com>

<sys/slaballoc.h>: Switch to lighter <sys/_malloc.h> header.

The <sys/globaldata.h> embeds SLGlobalData, which in turn embeds the
"struct malloc_type". Adjust several kernel sources for missing
includes where memory allocation is performed. Try to use alphabetical
include order.

Now (in most cases) <sys/malloc.h> is included after <sys/objcache.h>.
Once it gets cleaned up, the <sys/malloc.h> inclusion could be moved
out of <sys/idr.h> to the drm Linux compat layer linux/slab.h without side
effects.



# 13dd34d8 18-Oct-2019 zrj <rimvydas.jasinskas@gmail.com>

kernel: Cleanup <sys/uio.h> issues.

The iovec_free() inline greatly complicates this header's inclusion. The
NULL check is not always seen from <sys/_null.h>. Luckily only three
kernel sources need it: kern_subr.c, sys_generic.c and uipc_syscalls.c.
Also just a single dev/drm source makes use of 'struct uio'.
* Include <sys/uio.h> explicitly first in drm_fops.c to avoid kfree()
macro override in drm compat layer.
* Use <sys/_uio.h> where only the enums and struct uio are needed, but
ensure that userland will not include it, for possible later <sys/user.h>
use.
* Stop using <sys/vnode.h> as a shortcut for the uiomove*() prototypes.
The uiomove*() family of functions may transfer data across the
kernel/user space boundary; including this header explicitly marks
sources as such.
* Prefer to add <sys/uio.h> after <sys/systm.h>, but before <sys/proc.h>
and definitely before <sys/malloc.h> (except for the 3 sources mentioned
above); this ordering is illustrated below. This will allow removing
<sys/malloc.h> from <sys/uio.h> later on.
* Adjust <sys/user.h> to use component headers instead of <sys/uio.h>.

While there, use opportunity for a minimal whitespace cleanup.

No functional differences observed in compiler intermediates.
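
A minimal illustration of the include ordering preferred above (the
file and the exact set of headers are hypothetical):

    /* example.c -- illustrative include order only */
    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/uio.h>     /* after <sys/systm.h>, before <sys/proc.h>
                              * and <sys/malloc.h> */
    #include <sys/proc.h>
    #include <sys/malloc.h>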



Revision tags: v5.6.2, v5.6.1, v5.6.0, v5.6.0rc1, v5.7.0, v5.4.3
# 67e7cb85 14-May-2019 Matthew Dillon <dillon@apollo.backplane.com>

kernel - VM rework part 8 - Precursor work for terminal pv_entry removal

* Adjust structures so the pmap code can iterate backing_ba's with
just the vm_object spinlock.

Add a ba.pmap back-pointer.

Move entry->start and entry->end into the ba (ba.start, ba.end).
This replicates the base entry->ba.start and entry->ba.end,
but local modifications are locked by individual objects to allow
pmap ops to just look at backing ba's iterated via the object.

Remove the entry->map back-pointer.

Remove the ba.entry_base back-pointer.

* ba.offset is now an absolute offset rather than an additive one (see
the sketch below). Adjust all code that calculates and uses ba.offset
(fortunately it is all concentrated in vm_map.c and vm_fault.c).

* Refactor ba.start/offset/end modifications to be atomic with
the necessary spin-locks to allow the pmap code to safely iterate
the vm_map_backing list for a vm_object.

* Test VM system with full synth run.
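
To illustrate the ba.offset change with simplified, illustrative code
(not the actual vm_map.c/vm_fault.c logic):

    /*
     * Additive scheme (old): the page index of 'va' accumulated an
     * offset at every hop while descending the backing chain.
     */
    pindex = atop(va - entry->ba.start);
    for (ba = &entry->ba; ba != NULL; ba = ba->backing_ba)
        pindex += OFF_TO_IDX(ba->offset);

    /*
     * Absolute scheme (new): any ba reached directly, e.g. by the
     * pmap code iterating a vm_object's backing list, yields the
     * index in one step.
     */
    pindex = OFF_TO_IDX(ba->offset) + atop(va - ba->start);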



# 44293a80 09-May-2019 Matthew Dillon <dillon@apollo.backplane.com>

kernel - VM rework part 3 - Cleanup pass

* Cleanup various structures and code


# 9de48ead 09-May-2019 Matthew Dillon <dillon@apollo.backplane.com>

kernel - VM rework part 2 - Replace backing_object with backing_ba

* Remove the vm_object based backing_object chains and all related
chaining code.

This removes an enormous number of locks from the VM system and
also removes object-to-object dependencies which required careful
traversal code. A great deal of complex code has been removed
and replaced with far simpler code.

Ultimately the intention will be to support removal of pv_entry
tracking from vm_pages to gain lockless shared faults, but that
is far in the future. It will require hanging vm_map_backing
structures off of a list based in the object.

* Implement the vm_map_backing structure which is embedded in the
vm_map_entry and then links to additional dynamically allocated
vm_map_backing structures via entry->ba.backing_ba. This structure
contains the object and offset and essentially takes over the
functionality that object->backing_object used to have.

Backing objects are now handled via vm_map_backing (see the sketch
below). In this commit, fork operations create a fan-in tree to shared
subsets of backings via vm_map_backing; these subsets are not collapsed
in any way yet.

* Remove all the vm_map_split and collapse code. Every last line
is gone. It will be reimplemented using vm_map_backing in a
later commit.

This means that as-of this commit both recursive forks and
parent-to-multiple-children forks cause an accumulation of
inefficient lists of backing objects to occur in the parent
and children. This will begin to get addressed in part 3.

* The code no longer releases the vm_map lock (typically shared)
across (get_pages) I/O. There are no longer any chaining locks to
get in the way (hopefully). This means that the code does not
have to re-check as carefully as it did before. However, some
complexity will have to be added back in once we begin to address
the accumulation of vm_map_backing structures.

* Paging performance improved by 30-40%
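
A condensed sketch of the arrangement described above, with fields
abridged to those named in this log (see sys/vm/vm_map.h for the
authoritative layout):

    struct vm_map_backing {
        struct vm_map_backing *backing_ba;  /* next deeper backing */
        struct vm_object      *object;      /* backing store */
        vm_ooffset_t           offset;      /* offset into object */
        /* ... */
    };

    struct vm_map_entry {
        /* ... */
        struct vm_map_backing  ba;      /* embedded head of the chain */
    };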



# 6f76a56d 07-May-2019 Matthew Dillon <dillon@apollo.backplane.com>

kernel - VM rework part 1 - Remove shadow_list

* Remove shadow_head, shadow_list, shadow_count.

* This leaves the kernel operational but without collapse optimizations
on 'other' processes when a program exits.



# 65cece21 03-May-2019 Matthew Dillon <dillon@apollo.backplane.com>

kernel - Backout procfs map change

* Backout the addition of MAP_STACK 'STK' flagging in the map
output. It blows up gcore's /proc/pid/map scan (not procfs's
fault)... but let's not rock the boat for now. The flag isn't
important.



# 4837705e 03-May-2019 Matthew Dillon <dillon@apollo.backplane.com>

pthreads and kernel - change MAP_STACK operation

* Only allow new mmap()'s to intrude on ungrown MAP_STACK areas when
MAP_TRYFIXED is specified. This was not working as intended before.
Adjust the manual page to be more clear.

* Make kern.maxssiz (the maximum user stack size) visible via sysctl.

* Add kern.maxthrssiz, indicating the approximate space for placement
of pthread stacks. This defaults to 128GB.

* The old libthread_xu stack code did use TRYFIXED and will work
with the kernel changes, but change how it works so that it no longer
assumes the user stack should suddenly be limited to the pthread stack
size (~2MB).

Just use a normal mmap() now without TRYFIXED and a hint based on
kern.maxthrssiz (defaults to 512GB), calculating a starting address
hint that is (_usrstack - maxssiz - maxthrssiz), as sketched below.

* Adjust procfs to report MAP_STACK segments as 'STK'.
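
A minimal userland sketch of the placement math described above; the
sysctl value types and the "kern.usrstack" name are assumptions of this
illustration, and error handling is omitted:

    #include <sys/types.h>
    #include <sys/mman.h>
    #include <sys/sysctl.h>
    #include <stddef.h>

    static void *
    alloc_thread_stack(size_t stksize)
    {
        unsigned long usrstack = 0, maxssiz = 0, maxthrssiz = 0;
        size_t len;
        void *hint, *stk;

        len = sizeof(usrstack);
        sysctlbyname("kern.usrstack", &usrstack, &len, NULL, 0);
        len = sizeof(maxssiz);
        sysctlbyname("kern.maxssiz", &maxssiz, &len, NULL, 0);
        len = sizeof(maxthrssiz);
        sysctlbyname("kern.maxthrssiz", &maxthrssiz, &len, NULL, 0);

        /* hint below the main stack's reserved region */
        hint = (void *)(usrstack - maxssiz - maxthrssiz);
        stk = mmap(hint, stksize, PROT_READ | PROT_WRITE,
            MAP_PRIVATE | MAP_ANON | MAP_STACK, -1, 0);
        return (stk == MAP_FAILED ? NULL : stk);
    }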



Revision tags: v5.4.2
# 47ec0953 23-Mar-2019 Matthew Dillon <dillon@apollo.backplane.com>

kernel - Refactor vm_map structure 1/2

* Remove the embedded vm_map_entry 'header' from vm_map.

* Remove the prev and next fields from vm_map_entry.

* Refactor the code to iterate only via the RB tree. This is not as
optimal as the prev/next fields were, but we can improve the RB tree
code later to recover the performance.
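
Iteration now follows this sketch (the tree and field names are
illustrative; the real definitions live in sys/vm/vm_map.h):

    vm_map_entry_t entry;

    RB_FOREACH(entry, vm_map_rb_tree, &map->rb_root) {
        /* visit each map entry in address order */
    }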



Revision tags: v5.4.1, v5.4.0, v5.5.0, v5.4.0rc1, v5.2.2, v5.2.1, v5.2.0, v5.3.0, v5.2.0rc, v5.0.2, v5.0.1, v5.0.0, v5.0.0rc2, v5.1.0, v5.0.0rc1, v4.8.1, v4.8.0, v4.6.2, v4.9.0, v4.8.0rc
# 558bda99 25-Jan-2017 Matthew Dillon <dillon@apollo.backplane.com>

procfs - don't try to count rss

* Don't try to count rss. Scanning individual pages for a 64-bit
mapping can take forever (literally!).

* Fixes problems accessing /proc/*/map for vkernel processes.



# 76f1911e 23-Jan-2017 Matthew Dillon <dillon@apollo.backplane.com>

kernel - pmap and vkernel work

* Remove the pmap.pm_token entirely. The pmap is currently protected
primarily by fine-grained locks and the vm_map lock. The intention
is to eventually be able to protect it without the vm_map lock at all.

* Enhance pv_entry acquisition (representing PTE locations) to include
a placemarker facility for non-existent PTEs, allowing the PTE location
to be locked whether a pv_entry exists for it or not.

* Fix dev_dmmap (struct dev_mmap) (for future use); it was returning a
page index for physical memory as a 32-bit integer instead of a 64-bit
integer.

* Use pmap_kextract() instead of pmap_extract() where appropriate.

* Put the token contention test back in kern_clock.c for real kernels
so token contention shows up as sys% instead of idle%.

* Modify the pmap_extract() API to also return a locked pv_entry,
and add pmap_extract_done() to release it. Adjust users of
pmap_extract() (see the sketch below).

* Change madvise/mcontrol MADV_INVAL (used primarily by the vkernel)
to use a shared vm_map lock instead of an exclusive lock. This
significantly improves the vkernel's performance and greatly
reduces stalls and glitches when typing in one under heavy loads.

* The new placemarkers also have the side effect of fixing several
difficult-to-reproduce bugs in the pmap code, by ensuring that
shared and unmanaged pages are properly locked whereas before only
managed pages (with pv_entry's) were properly locked.

* Adjust the vkernel's pmap code to use atomic ops in numerous places.

* Rename the pmap_change_wiring() call to pmap_unwire(). The routine
was only being used to unwire (and could only safely be called for
unwiring anyway). Remove the unused 'wired' and the 'entry'
arguments.

Also change how pmap_unwire() works to remove a small race condition.

* Fix race conditions in the vmspace_*() system calls which could lead
to pmap corruption. Note that the vkernel did not trigger any of
these conditions; I found them while looking for another bug.

* Add missing maptypes to procfs's /proc/*/map report.
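
Callers of the adjusted API then follow this sketch (the exact handle
type is an assumption of this illustration):

    void *pv_handle;        /* locked pv_entry or placemarker */
    vm_paddr_t pa;

    pa = pmap_extract(pmap, va, &pv_handle);
    /* ... pa stays valid while the PTE location is locked ... */
    pmap_extract_done(pv_handle);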



Revision tags: v4.6.1, v4.6.0, v4.6.0rc2, v4.6.0rc, v4.7.0, v4.4.3, v4.4.2, v4.4.1, v4.4.0, v4.5.0, v4.4.0rc, v4.2.4, v4.3.1, v4.2.3, v4.2.1, v4.2.0, v4.0.6, v4.3.0, v4.2.0rc, v4.0.5, v4.0.4, v4.0.3, v4.0.2
# e86ad61a 27-Dec-2014 Markus Pfeiffer <markus.pfeiffer@morphism.de>

Allow reading with small uio->uio_resid or uio->uio_offset > 0 from /proc/X/map

Currently the map file in procfs sometimes cannot be read with programs
like cat. This happens when the buffer being read into cannot hold the
entire contents of the map file; the read call then returns EFBIG, which
is somewhat confusing (see the example below).

Submitted-By: Vasily Postnicov <shamaz.mazum@gmail.com>
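
For illustration, the kind of consumer this fixes: a reader draining
/proc/curproc/map through a deliberately small buffer in several read()
calls, where the second and later calls used to fail with EFBIG:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
        char buf[256];          /* smaller than the map output */
        ssize_t n;
        int fd = open("/proc/curproc/map", O_RDONLY);

        if (fd < 0)
            return (1);
        while ((n = read(fd, buf, sizeof(buf))) > 0)
            fwrite(buf, 1, (size_t)n, stdout);
        close(fd);
        return (0);
    }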



# fc8827e6 25-Nov-2014 Matthew Dillon <dillon@apollo.backplane.com>

kernel - Show ukmap in /proc/*/map output

* /proc/*/map now displays uksmap mappings.


Revision tags: v4.0.1, v4.0.0, v4.0.0rc3, v4.0.0rc2, v4.0.0rc, v4.1.0, v3.8.2, v3.8.1, v3.6.3, v3.8.0, v3.8.0rc2, v3.9.0, v3.8.0rc, v3.6.2, v3.6.1, v3.6.0, v3.7.1, v3.6.0rc, v3.4.3
# f6a0c819 30-Jul-2013 Johannes Hofmann <johannes.hofmann@gmx.de>

kernel: add OBJT_MGTDEVICE in some more places


# dc71b7ab 31-May-2013 Justin C. Sherrill <justin@shiningsilence.com>

Correct BSD License clause numbering from 1-2-4 to 1-2-3.

Apparently everyone's doing it:
http://svnweb.freebsd.org/base?view=revision&revision=251069

Submitted-by: "Eitan Adler" <lists at eitanadler.com>



Revision tags: v3.4.2, v3.4.1, v3.4.0, v3.4.0rc, v3.5.0, v3.2.2, v3.2.1, v3.2.0, v3.3.0, v3.0.3, v3.0.2, v3.0.1, v3.1.0, v3.0.0
# 86d7f5d3 26-Nov-2011 John Marino <draco@marino.st>

Initial import of binutils 2.22 on the new vendor branch

Future versions of binutils will also reside on this branch rather
than continuing to create new binutils branches for each new version.


# 47538602 17-Nov-2011 Matthew Dillon <dillon@apollo.backplane.com>

kernel - more procfs work

* uiomove_frombuf() takes care of indexing uio_offset and checking its
range for us so we don't have to do it ourselves; clean up the use
cases in procfs (see the sketch below).

* Generate somewhat more consistent text output for /proc/<pid>/map by
formatting the map entry range with static widths.

* ps_nargvstr is a signed number, do a better range check on it.
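
A condensed sketch of the simplified pattern (the handler name and
buffer setup are illustrative):

    /*
     * procfs read handler sketch: uiomove_frombuf() consumes and
     * range-checks uio->uio_offset itself, so the handler only
     * formats the text and hands it off.
     */
    static int
    procfs_doexample(struct uio *uio, char *buf, int len)
    {
        /* ... format the report into buf (len bytes used) ... */
        return (uiomove_frombuf(buf, len, uio));
    }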



# 4643740a 15-Nov-2011 Matthew Dillon <dillon@apollo.backplane.com>

kernel - Major signal path adjustments to fix races, tsleep race fixes, +more

* Refactor the signal code to properly hold the lp->lwp_token. In
particular the ksignal() and lwp_signotify() paths.

* The tsleep() path must also hold lp->lwp_token to properly handle
lp->lwp_stat states and interlocks.

* Refactor the timeout code in tsleep() to ensure that endtsleep() is only
called from the proper context, and fix races between endtsleep() and
lwkt_switch().

* Rename proc->p_flag to proc->p_flags

* Rename lwp->lwp_flag to lwp->lwp_flags

* Add lwp->lwp_mpflags and move flags which require atomic ops (are adjusted
when not the current thread) to the new field.

* Add td->td_mpflags and move flags which require atomic ops (are adjusted
when not the current thread) to the new field.

* Add some freeze testing code to the x86-64 trap code (default disabled).



# b12defdc 18-Oct-2011 Matthew Dillon <dillon@apollo.backplane.com>

kernel - Major SMP performance patch / VM system, bus-fault/seg-fault fixes

This is a very large patch which reworks locking in the entire VM subsystem,
concentrated on VM objects and the x86-64 pmap code. These fixes remove
nearly all the spin lock contention for non-threaded VM faults and narrows
contention for threaded VM faults to just the threads sharing the pmap.

Multi-socket many-core machines will see a 30-50% improvement in parallel
build performance (tested on a 48-core opteron), depending on how well
the build parallelizes.

As part of this work a long-standing problem on 64-bit systems where programs
would occasionally seg-fault or bus-fault for no reason has been fixed. The
problem was related to races between vm_fault, the vm_object collapse code,
and the vm_map splitting code.

* Most uses of vm_token have been removed. All uses of vm_spin have been
removed. These have been replaced with per-object tokens and per-queue
(vm_page_queues[]) spin locks.

Note in particular that since we still have the page coloring code, the
PQ_FREE and PQ_CACHE queues are actually many queues, individually
spin-locked, resulting in excellent MP page allocation and freeing
performance.

* Reworked vm_page_lookup() and vm_object->rb_memq. All (object,pindex)
lookup operations are now covered by the vm_object hold/drop system,
which utilizes pool tokens on vm_objects. Calls now require that the
VM object be held in order to ensure a stable outcome.

Also added vm_page_lookup_busy_wait(), vm_page_lookup_busy_try(),
vm_page_busy_wait(), vm_page_busy_try(), and other API functions
which integrate the PG_BUSY handling.
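
The resulting lookup pattern, sketched with the new API (argument
details are illustrative):

    vm_page_t m;

    vm_object_hold(object);         /* stabilize (object, pindex) */
    m = vm_page_lookup_busy_wait(object, pindex, TRUE, "pglkp");
    if (m) {
        /* ... operate on the busied page ... */
        vm_page_wakeup(m);          /* release PG_BUSY */
    }
    vm_object_drop(object);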

* Added OBJ_CHAINLOCK. Most vm_object operations are protected by
the vm_object_hold/drop() facility which is token-based. Certain
critical functions which must traverse backing_object chains use
a hard-locking flag and lock almost the entire chain as it is traversed
to prevent races against object deallocation, collapses, and splits.

The last object in the chain (typically a vnode) is NOT locked in
this manner, so concurrent faults which terminate at the same vnode will
still have good performance. This is important e.g. for parallel compiles
which might be running dozens of the same compiler binary concurrently.

* Created a per vm_map token and removed most uses of vmspace_token.

* Removed the mp_lock in sys_execve(). It has not been needed in a while.

* Add kmem_lim_size() which returns approximate available memory (reduced
by available KVM), in megabytes. This is now used to scale up the
slab allocator cache and the pipe buffer caches to reduce unnecessary
global kmem operations.

* Rewrote vm_page_alloc(), various bits in vm/vm_contig.c, the swapcache
scan code, and the pageout scan code. These routines were rewritten
to use the per-queue spin locks.

* Replaced the exponential backoff in the spinlock code with something
a bit less complex and cleaned it up.

* Restructured the IPIQ func/arg1/arg2 array for better cache locality.
Removed the per-queue ip_npoll and replaced it with a per-cpu gd_npoll,
which is used by other cores to determine if they need to issue an
actual hardware IPI or not. This reduces hardware IPI issuance
considerably (and the removal of the decontention code reduced it even
more).

* Temporarily removed the lwkt thread fairq code and disabled a number of
features. These will be worked back in once we track down some of the
remaining performance issues.

Temporarily removed the lwkt thread resequencer for tokens for the same
reason. This might wind up being permanent.

Added splz_check()s in a few critical places.

* Increased the number of pool tokens from 1024 to 4001 and went to a
prime-number mod algorithm to reduce overlaps.
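
The idea, as a sketch (the hash details are illustrative):

    #define POOL_TOKEN_COUNT    4001    /* prime */

    /*
     * Map an address to a pool token.  A prime modulus spreads
     * power-of-2-aligned addresses, which a mod-1024 table mapped
     * onto overlapping slots, across the whole table.
     */
    tok = &pool_tokens[((uintptr_t)ptr / sizeof(void *)) %
                       POOL_TOKEN_COUNT];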

* Removed the token decontention code. This was a bit of an eyesore and
while it did its job when we had global locks it just gets in the way now
that most of the global locks are gone.

Replaced the decontention code with a fall back which acquires the
tokens in sorted order, to guarantee that deadlocks will always be
resolved eventually in the scheduler.

* Introduced a simplified spin-for-a-little-while function
_lwkt_trytoken_spin() that the token code now uses rather than giving
up immediately.

* The vfs_bio subsystem no longer uses vm_token and now uses the
vm_object_hold/drop API for buffer cache operations, resulting
in very good concurrency.

* Gave the vnode its own spinlock instead of sharing vp->v_lock.lk_spinlock,
which fixes a deadlock.

* Adjusted all platform pmap.c's to handle the new main kernel APIs. The
i386 pmap.c is still a bit out of date but should be compatible.

* Completely rewrote very large chunks of the x86-64 pmap.c code. The
critical path no longer needs pmap_spin but pmap_spin itself is still
used heavily, particularly in the pv_entry handling code.

A per-pmap token and per-pmap object are now used to serialize pmap
access and vm_page lookup operations when needed.

The x86-64 pmap.c code now uses only vm_page->crit_count instead of
both crit_count and hold_count, which fixes races against other parts
of the kernel that use vm_page_hold().

_pmap_allocpte() mechanics have been completely rewritten to remove
potential races. Much of pmap_enter() and pmap_enter_quick() has also
been rewritten.

Many other changes.

* The following subsystems (and probably more) no longer use the vm_token
or vmobj_token in critical paths:

x The swap_pager now uses the vm_object_hold/drop API instead of vm_token.

x mmap() and vm_map/vm_mmap in general now use the vm_object_hold/drop API
instead of vm_token.

x vnode_pager

x zalloc

x vm_page handling

x vfs_bio

x umtx system calls

x vm_fault and friends

* Minor fixes to fill_kinfo_proc() to deal with process scan panics (ps)
revealed by recent global lock removals.

* lockmgr() locks no longer support LK_NOSPINWAIT. Spin locks are
unconditionally acquired.

* Replaced netif/e1000's spinlocks with lockmgr locks. The spinlocks
were not appropriate owing to the large context they were covering.

* Misc atomic ops added



Revision tags: v2.12.0, v2.13.0, v2.10.1, v2.11.0, v2.10.0, v2.9.1, v2.8.2, v2.8.1, v2.8.0, v2.9.0, v2.6.3, v2.7.3, v2.6.2, v2.7.2, v2.7.1, v2.6.1, v2.7.0, v2.6.0, v2.5.1, v2.4.1, v2.5.0, v2.4.0, v2.3.2, v2.3.1, v2.2.1, v2.2.0, v2.3.0, v2.1.1, v2.0.1
# c7e98b2f 19-Feb-2007 Simon Schubert <corecode@dragonflybsd.org>

1:1 Userland threading stage 2.18/4:

Push lwp use a bit further by making some places lwp aware.
This commit deals with ddb, procfs/ptrace and various consumers of
allproc_scan.


# f8c7a42d 20-Dec-2006 Matthew Dillon <dillon@dragonflybsd.org>

Rename sprintf -> ksprintf
Rename snprintf -> ksnprintf

Make allowances for source files that are compiled for both userland and
the kernel.


# 1b874851 11-Sep-2006 Matthew Dillon <dillon@dragonflybsd.org>

Move flag(s) representing the type of vm_map_entry into its own vm_maptype_t
type. This is a precursor to adding a new VM mapping type for virtualized
page tables.


# ac424f9b 02-May-2004 Chris Pressey <cpressey@dragonflybsd.org>

Style(9) cleanup to src/sys/vfs, stage 15/21: procfs.

- Convert K&R-style function definitions to ANSI style.

Submitted-by: Andre Nathan <andre@digirati.com.br>
Additional-reformatting-by: cpressey
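
For reference, the shape of the conversion (the function is
hypothetical):

    /* K&R definition (before): */
    static int
    example_fn(p, flags)
        struct proc *p;
        int flags;
    {
        return (flags);
    }

    /* ANSI definition (after): */
    static int
    example_fn(struct proc *p, int flags)
    {
        return (flags);
    }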


# 1f2de5d4 07-Aug-2003 Matthew Dillon <dillon@dragonflybsd.org>

kernel tree reorganization stage 1: Major cvs repository work (not logged as
commits) plus a major reworking of the #include's to accommodate the
relocations.

* CVS repository files manually moved. Old directories left intact
and empty (temporary).

* Reorganize all filesystems into vfs/, most devices into dev/,
sub-divide devices by function.

* Begin to move device-specific architecture files to the device
subdirs rather than throwing them all into, e.g. i386/include

* Reorganize files related to system busses, placing the related code
in a new bus/ directory. Also move cam to bus/cam though this may
not have been the best idea in retrospect.

* Reorganize emulation code and place it in a new emulation/ directory.

* Remove the -I- compiler option in order to allow #include file
localization, rename all config generated X.h files to use_X.h to
clean up the conflicts.

* Remove /usr/src/include (or /usr/include) dependencies during the
kernel build, beyond what is normally needed to compile helper
programs.

* Make config create 'machine' softlinks for architecture specific
directories outside of the standard <arch>/include.

* Bump the config rev.

WARNING! after this commit /usr/include and /usr/src/sys/compile/*
should be regenerated from scratch.


